Class Summary: M S. Raunak
The last decade has seen super-exponential growth of the World Wide Web. The demand for bandwidth has consistently grown faster than the supply. Researchers have looked for alternative solutions to improve response time and reduce bandwidth consumption in the Internet. Caching is well accepted as a viable method for easing the ever-growing bandwidth need and for improving the speed of information delivery. However, single-point caching has limited scalability. Cooperative caching, where cache servers support each other in serving requests for cached objects, has emerged as an approach to overcome this limitation.
In class we discussed several studies of cooperative caching architectures and protocols, focusing on caching in today's Web environment. We discussed the papers by grouping them according to their approaches to achieving cooperation.
For the purpose of our discussion, we defined hierarchical caching as the architecture in which caches are placed at multiple levels of the network. In a distributed caching architecture, on the other hand, caches are placed only at the bottom level of the network and there are no intermediate caches.
Cooperative caching was first studied in the file system environment, and some of the concepts from those studies were later applied to the Web. We summarize some of those studies in this report. We describe caching architectures that are hierarchical, distributed, or cluster based, as well as communication protocols between cache servers in a cooperative environment.
The ARPA-funded Harvest project was one of the earliest implementations of hierarchical caching. In Harvest, caches were organized in a tree-like fashion. On a miss, a cache would send query datagrams to each neighbor and parent, plus an ICMP echo to the object's home site, and choose the fastest-responding server from which to retrieve the data. The cache returns data to its clients as soon as the first few bytes of an object arrive in the cache.
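To make the resolution step concrete, here is a minimal sketch, under my own assumptions, of the "probe everyone, use the fastest responder" rule; the helper names (probe, resolve) are hypothetical, and the real Harvest cache uses UDP query datagrams and ICMP echoes rather than the threads simulated here.

    import concurrent.futures
    import random
    import time

    def probe(server, url):
        """Hypothetical probe: ask `server` whether it can serve `url`.
        Stands in for Harvest's UDP query datagram / ICMP echo."""
        time.sleep(random.uniform(0.01, 0.2))   # simulated network round trip
        return server

    def resolve(url, neighbors, parents, origin):
        """Return the first server (neighbor, parent, or origin) to answer,
        mimicking Harvest's 'fetch from the fastest responder' rule."""
        candidates = neighbors + parents + [origin]
        with concurrent.futures.ThreadPoolExecutor(len(candidates)) as pool:
            futures = [pool.submit(probe, s, url) for s in candidates]
            # The first future to complete corresponds to the fastest responder.
            done, _ = concurrent.futures.wait(
                futures, return_when=concurrent.futures.FIRST_COMPLETED)
            return next(iter(done)).result()

    # Example: fetch from whichever of two siblings, a parent, or the origin answers first.
    print(resolve("http://example.com/x", ["sib1", "sib2"], ["parent"], "origin"))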
The Summary Cache paper was discussed in detail in class. In Summary Cache, each proxy keeps a compact summary of the cache directory of every other cooperating proxy. When a cache miss occurs, a proxy first probes all the summaries to see if the request might be a cache hit in another proxy, and sends query messages only to those proxies whose summaries show promising results. As in Cache Digests (a very similar approach for sharing compact directory information), the summary is not accurate at all times. A false hit occurs when the summary indicates a cache hit but the object is not actually in that cache; in this case, the penalty is a wasted query message. When the request would be a remote cache hit but the summary indicates otherwise, a false miss occurs; the penalty of a false miss is a higher miss ratio.
Both Cache Digests and Summary Cache store the summary using Bloom filters. A Bloom filter is a hash-based probabilistic scheme that can represent a set of keys (URLs in this case) with minimal memory. It can answer membership queries with zero probability of false negatives and a low probability of false positives.
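A minimal Bloom filter sketch (illustrative sizes and hashing, not the exact scheme from either paper) shows why a summary can produce false hits but never a false miss for a URL that was actually inserted:

    import hashlib

    class BloomFilter:
        """Tiny Bloom filter: each key sets k bit positions.
        A lookup can return a false positive, but never a false negative."""
        def __init__(self, num_bits=8192, num_hashes=4):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8)

        def _positions(self, key):
            for i in range(self.num_hashes):
                digest = hashlib.sha1(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(digest[:4], "big") % self.num_bits

        def add(self, url):
            for pos in self._positions(url):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, url):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(url))

    summary = BloomFilter()
    summary.add("http://example.com/a.html")
    assert "http://example.com/a.html" in summary   # always true: no false negatives
    print("http://example.com/b.html" in summary)    # usually False; True would be a false hit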
We also discussed a centrally managed cache cooperation architecture called CRISP. A CRISP cache consists of a group of cooperating caching servers sharing a central directory of cached objects. In this study, the authors argue that such a central structure does not necessarily imply poor performance.
CRISP servers cooperate to share their caches, using a central mapping service with a complete directory of the cache contents of all participating proxies. Each client is bound to one of several caching servers or proxies, which cache objects on behalf of their clients. Any URL fetch request that misses in a client's private browser cache is sent to the local proxy. If the requested object is cached at any proxy in the cache, the object is delivered from that cache without accessing its source site. To probe the cooperative cache, the proxy forwards the requested URL to a mapping server. The mapping service maintains a complete cache directory. Proxies notify the mapping service any time they add or remove an object from the cache. The updates and the probing are done by exchanging unicast messages. If the map in the central server indicates that a requested object is resident in a peer, the requesting proxy retrieves the object directly from that peer, and returns it to the client.
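The notify-and-probe exchange might be sketched as follows; this is a rough illustration, and the class and method names are my own rather than taken from the CRISP paper:

    class MappingServer:
        """Central directory mapping each cached URL to the proxy holding it."""
        def __init__(self):
            self.directory = {}                      # url -> proxy id

        def notify_add(self, proxy_id, url):         # unicast update from a proxy
            self.directory[url] = proxy_id

        def notify_remove(self, proxy_id, url):
            if self.directory.get(url) == proxy_id:
                del self.directory[url]

        def lookup(self, url):                       # unicast probe from a proxy
            return self.directory.get(url)

    class Proxy:
        def __init__(self, proxy_id, mapper):
            self.id, self.mapper, self.cache = proxy_id, mapper, {}

        def fetch(self, url):
            if url in self.cache:                    # local hit
                return self.cache[url]
            peer = self.mapper.lookup(url)           # probe the central map
            obj = (f"<{url} from proxy {peer}>" if peer is not None
                   else f"<{url} from origin>")      # peer hit vs. fall back to the source
            self.cache[url] = obj
            self.mapper.notify_add(self.id, url)     # keep the directory current
            return obj

    mapper = MappingServer()
    p1, p2 = Proxy("p1", mapper), Proxy("p2", mapper)
    p1.fetch("http://example.com/a")                 # misses everywhere, goes to the origin
    print(p2.fetch("http://example.com/a"))          # directory now points p2 at p1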
Although the CRISP prototype was implemented on top of the Harvest cache, it does not have a hierarchical structure. It can, however, be used along with hierarchical caching: for example, CRISP caches can be used to flatten the lowest levels of a cache hierarchy, or to incrementally expand capacity at any level of a hierarchy.
With increased bandwidth availability, more bandwidth-intensive applications (real-time video, music, and online games) will run over the Web. These new applications are likely to worsen the already overloaded state of the Internet. Research is needed to find out how cooperative caching can help such bandwidth-intensive applications. Consistency is also a big issue in cooperative caching and needs to be addressed in greater detail.
Caching is a well-accepted method for reducing access time and saving bandwidth in the face of the exponentially growing demand on the Web. However, single-point caching has its limitations. Almost all the studies found that caches perform better when they cooperate, but cooperation comes with the cost of maintaining state and communicating with each other. The performance of cooperative caching depends on good communication among the cache servers at minimum cost. An ideal solution would enable every cache to cooperate with every other cache whenever it is beneficial to do so, with minimum cost.
Reviewer: Zhenlin Wang
This paper presents a new protocol, Summary-Cache-Enhanced Internet Cache Protocol. As the authors point out, in ICP, whenever a cache miss occurs the proxy multicasts a query message to all other proxies, so ICP suffers a huge overhead in inter-proxy message passing, which results in poor scalability. The new protocol attacks this problem through summaries kept at the proxies. Each proxy keeps a summary of the cache directory of every other proxy. Rather than sending requests to all other proxies, under the new protocol a proxy sends query messages only to those promising proxies indicated by the summaries. The idea of summaries here is similar to the hints discussed in Efficient Cooperative Caching using Hints. In Summary Cache, you can find the location of a document by looking up your local summary, and you go to the server if there is a miss or a false hit. In the hint-based cooperative cache, the hints in effect build a "probe list" of a cache block's location, which means you may need to look up the hints in several caches before you hit a block or finally miss it.
Two key issues affecting scalability are the update frequency and the size of the summaries. In the new scalable protocol, summaries do not have to be accurate, at the cost of a tolerable degradation in hit ratio. The update of summaries is delayed until the percentage of new documents reaches a threshold. Bloom filters are used to reduce the memory requirement of the summaries. However, no matter what technique is used to represent the summaries, the memory requirement always grows linearly with the number of proxies, so scalability is still constrained by cache size. To extend scalability, hierarchical summaries and proxies could possibly be used to reduce the memory requirement of both caches and summaries. Bloom filters also incur more computational overhead, both in lookup and in update.
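As a rough illustration of the delayed-update rule (a sketch under my own assumptions: the threshold value is a placeholder, and a plain set stands in for the Bloom-filter summary the protocol actually broadcasts):

    class SummaryUpdater:
        """Broadcast a fresh summary only after the cache has changed 'enough'."""
        def __init__(self, threshold=0.02):          # e.g. 2% of cached documents
            self.threshold = threshold
            self.cached_docs = set()
            self.changes_since_update = 0

        def on_new_document(self, url, broadcast):
            self.cached_docs.add(url)
            self.changes_since_update += 1
            # Delay the (expensive) summary broadcast until the fraction of new
            # documents crosses the threshold; in between, peers work with a
            # slightly stale summary and may see false misses.
            if self.changes_since_update >= self.threshold * max(len(self.cached_docs), 1):
                broadcast(self.cached_docs)           # regenerate and send the summary
                self.changes_since_update = 0

    updater = SummaryUpdater(threshold=0.02)
    updater.cached_docs.update(f"http://old/{i}" for i in range(100))   # pre-existing cache
    for i in range(3):                                # third new document triggers the broadcast
        updater.on_new_document(f"http://new/{i}",
                                broadcast=lambda docs: print("summary rebuilt:", len(docs)))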
I do not much like the experiments in the paper. At first glance it seems that no-ICP is the best option if you just look at Tables 2 and 4, although the trace used in Table 4 contains noticeable remote hits. I think ICP wins in that it reduces the traffic between proxies and servers when there are remote hits, but this kind of traffic is not measured in the experiments.
Paper: Reduce, Reuse, Recycle: An Approach to Building Large Internet Caches
Summary
Reviewer: Abhishek Chandra
This paper describes a cooperative caching technique for web proxies that uses a centralized directory. The system described here, named CRISP, maintains a group of proxy caches that coordinate among themselves using a centralized directory called a mapping server. Whenever a proxy needs to retrieve an object not in its cache, it consults the mapping server for the location of the object in one of the other proxies. If the object is not available in any of the caches, the proxy goes to the source to fetch the object. The mapping server maintains an up-to-date directory of the contents of the caches and makes communication among the proxies faster.
Strengths and Weaknesses: The paper gives a good rationale for revisiting a centralized technique in spite of its commonly assumed drawbacks. The paper also manages to show improvements in cache hits achieved by cache cooperation. But the paper doesn't really compare the centralized approach with distributed ones; thus, although it makes a case for its approach in an intuitive fashion, it fails to give empirical evidence of its usefulness. The paper also discusses certain issues, such as the impact of latency on human perception, that are not altogether convincing. Also, though the paper claims to reduce network traffic by eliminating multicast messages between proxies, it does not talk about the network traffic required to keep the mapping server up to date. The paper is also not clear about the protocols used to detect failures and to recover from them.
Reviewer: Shivakumar Murugesan
This paper deals with a cooperative cache, CRISP. It is a distributed cache, but one that shares a central directory of cached objects. The well-known single point of failure introduced by a centralized directory is overcome by proper configuration. The paper introduces distributed caching, followed by the "supply-side" and "demand-side" approaches, and stresses the importance of demand-side caches; CRISP is a demand-side cache. The advantages of CRISP over other caching mechanisms are then explained.
In CRISP, each client is attached to a proxy, or caching server. The object a client requests is supplied by the proxy to which it is attached. If the object is not at that proxy, it is delivered to the client from another proxy without accessing the home site. A mapping server maintains a complete cache directory, and proxies notify it whenever they add or remove an object from their caches. Proxies can learn the global picture through the mapping server and add an object if a client needs it. CRISP is distinguished from other caches by its use of a centralized mapping service. The cost of the central mapping server is then analyzed: a conservative estimate shows that the data passing through the mapping server is only 1% of the aggregate size of the CRISP cache. The scalability and latency bottleneck is addressed with the following reasons: no object passes through the mapping server, the response time is well below the threshold of human perception, and the mapping service can be incrementally expanded.
The limitation of CRISP is that the round-trip time to the mapping server should be below about 20 ms, which can be achieved by prioritizing the internal traffic. The single point of failure is addressed by pointing out that even if the mapping server is down, only performance degrades, because objects have to be fetched directly from their home sites; it does not result in denial of service. The failure of a proxy can be marked in the mapping server, making it known to all proxies. Load balancing is possible because of the central mapping. Experimental data are presented to show the benefits of CRISP: graphs show the hit ratio for varying aggregate cache sizes and for different traces. The evaluation does not take into account "distant hits", which are less useful because it is more efficient to get an object directly from the home site than from a distant cache, as explained in the other paper. Inferences from the graphs are listed with explanations.
Reviewer: Vijay Sundaram
This paper presents a distributed Internet object caching scheme, namely CRISP (Caching and Replication for Internet Service Performance). In CRISP, each client is bound to one of several caching servers, or proxies, which cache objects on behalf of their clients. A URL fetch request that misses in the client's local cache is sent to the local proxy. The CRISP cache is also cooperative; this is achieved by forwarding the requested URL to a mapping server.
CRISP complements the hierarchical organization of caches used in its predecessors. Also, by not multicasting requests, the load of cache probes is limited to the mapping server in CRISP. The central mapping service in CRISP is not a latency bottleneck or a significant barrier to scalability. The mapping service can be expanded incrementally by adding more mapping servers, dividing responsibilities by statically partitioning the URL space across the servers.
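A hedged sketch of that static partitioning idea follows; the hashing scheme and server names are my own choices for illustration, not something specified in the paper. Each proxy hashes a URL to decide which mapping server to probe or notify, so the same URL always maps to the same directory entry.

    import hashlib

    def mapping_server_for(url, mapping_servers):
        """Statically partition the URL space: a URL always hashes to the
        same mapping server, keeping the directory consistent."""
        h = int.from_bytes(hashlib.md5(url.encode()).digest()[:4], "big")
        return mapping_servers[h % len(mapping_servers)]

    servers = ["map0.example.net", "map1.example.net", "map2.example.net"]
    print(mapping_server_for("http://example.com/index.html", servers))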
Internet caches minimize the access latency of shared objects and improve performance for all users by reducing network traffic. The factors considered in designing CRISP are the properties of Web access: read-only access to large objects with small URLs, and the ability to fall back to the Internet if the map fails, allow the simplicity and benefits of a centralized structure. I would rate the paper as well written and an easy read.
Reviewer: Osman S Unsal
This paper is concerned with a new distributed Internet object cache that was implemented at AT&T Labs. CRISP stands for Caching and Replication for Internet Service Performance. The first half of the paper describes the CRISP architecture and how it differs from other distributed caching schemes, and the second half presents a performance analysis of CRISP based on traces.
As also specified in Raunak's report, the main feature of CRISP is its central directory of cached objects. This means that any proxy can probe the entire cache with a single unicast message exchange. In contrast, other cooperative internet caching proxies probe the cache by multicasting queries to all of its peers, potentially increasing network traffic.
Following are some of my impressions about the paper:
The paper describes a cache called CRISP that is an Internet object cache targeted specifically at organizations that aggregate the end users of Internet services (for example, ISPs). It basically consists of a group of cooperating caching servers that share a central directory containing a list of cached objects.
The major drawback of the approach is its centralized design. The paper argues that the central mapping service is not a latency bottleneck or a scalability barrier, but the reasons given (the memory needed by the mapping server is small since it only stores URLs, human perception is slower than the added latency, and more mapping servers can be added) are not convincing. The idea seems similar to that of a domain name server. Moreover, since the round-trip latency between a proxy and the mapping server is assumed to be low, this will limit the geographical size of the cache. CRISP manages to keep working even if the mapping server fails because the cached objects are read-only. Another drawback of the paper is that it does not describe how the mapping server would be updated; this can be a major design issue that has been overlooked. As an aside, the approach will not work in tomorrow's world of dynamic Web content determined by the user who requests it.
The strong point of the paper is that it gives an approach that allows
ISPs to construct very large distributed caches as collections of proxy
servers, without sacrificing the benefits of a shared cache.
Paper: Not All Hits are Created Equal: Cooperative Proxy Caching Over a Wide-Area Network
Reviewer: Michael Bradshaw
Cooperative proxies have been shown to increase the performance and bandwidth available to their clients. However, some topologies might create circumstances in which fetching from the Internet would be quicker than fetching from a cooperating proxy. The paper outlines an extension of the Summary Cache system that stores a directory of each cache's contents in Bloom filters associated with a distance cost.
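A minimal sketch of the idea, with placeholder thresholds and cost units, and with plain sets standing in for the per-peer Bloom-filter summaries described in the paper: only fetch from a peer whose summary hits, whose distance is within a limit, and whose cost beats going to the origin.

    def choose_source(url, peer_summaries, origin_cost, max_peer_cost=50):
        """peer_summaries: list of (peer_name, summary, distance_cost).
        The summary is a plain set here; with real Bloom filters, a 'hit'
        may occasionally turn out to be a false hit."""
        candidates = [(cost, peer) for peer, summary, cost in peer_summaries
                      if url in summary and cost <= max_peer_cost]
        if candidates and min(candidates)[0] < origin_cost:
            return min(candidates)[1]       # nearest peer that claims to hold the object
        return "origin"                     # otherwise the origin server is cheaper

    peers = [("proxyA", {"http://x/1"}, 10), ("proxyB", {"http://x/1"}, 80)]
    print(choose_source("http://x/1", peers, origin_cost=40))   # -> proxyA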
Good points
1) They make a good point that there might be limits to cooperation.
2) The technique is very simple to build into existing protocols.
Bad Points
1) There is no data on the usefulness of the algorithm, i.e. what parameters would have to hold before it became useful. Furthermore, there is no concrete example of a system that might need this technique.
2) Content updates to other proxies occur with a push model, which may become synchronized and lead to burstiness in the communication.
3) The bandwidth between proxies seems to be dedicated to that purpose. It might be better to use that dedicated bandwidth to fetch cached objects, so that uncached objects can have more bandwidth and server load decreases.
Reviewer : Shiva Murugesan
This paper mainly stresses reducing distant hits and recommends that an object be fetched directly from its home site rather than from a distant cache. Caches get objects from neighbors that are within a distance threshold, and the shortest path is followed when more than one path exists. Related work such as broadcast probes, hash partitioning of the object space, and the direct service of CRISP is briefly explained. A per-node timestamp is used to order the messages (notifications) sent by a node, and an example is given to show why the ordering of notifications is important. The steps (the algorithm) executed by a node upon receiving a notification are given.
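The ordering rule can be sketched roughly as follows (names and structure are my own, not the paper's): a receiver keeps the latest timestamp seen per sender and ignores notifications that arrive out of order, so a stale "add" cannot undo a newer "remove".

    class NotificationReceiver:
        """Apply per-node (per-sender) timestamps so out-of-order
        notifications from a peer are discarded."""
        def __init__(self):
            self.latest = {}           # sender -> highest timestamp applied
            self.contents = {}         # sender -> set of URLs it claims to cache

        def on_notification(self, sender, timestamp, action, url):
            if timestamp <= self.latest.get(sender, -1):
                return                 # out of order or duplicate: ignore it
            self.latest[sender] = timestamp
            entries = self.contents.setdefault(sender, set())
            if action == "add":
                entries.add(url)
            elif action == "remove":
                entries.discard(url)

    r = NotificationReceiver()
    r.on_notification("proxyA", 2, "remove", "http://x/1")   # newer remove arrives first
    r.on_notification("proxyA", 1, "add", "http://x/1")      # stale add is ignored
    print(r.contents["proxyA"])                              # -> set()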
Paper: A Survey of Co-operative Caching
Reviewer : Osman Unsal
The survey is well structured and includes an excellent reference list. It probes the subject by starting from the historical background of caching in distributed file systems, where the cooperative caching concept first originated. It then moves on to hierarchical caching, followed by the hybrid approach of cluster-based caching, which leads the way to distributed caching. After this methodological probe, the cache-to-cache communication protocols are examined. Overall, the focus is on caching architecture and cache-to-cache communication.
As a final note, since I have only just been introduced to cooperative Internet caching, my first impression after reading the survey and the papers is that current research in this domain is characterized by the pervasiveness of read-only caching. This can be clearly explained by the current Internet scene: most of the traffic is read-only and directed towards client browsers consuming static text and image information. Another characteristic of the current research is the relatively little attention given to replacement policies. This can also be explained by the observation that, given the access patterns to the cooperative cache, capacity misses are not yet a big concern; in fact, experiments suggest that even with an infinite cache size the hit ratio does not increase significantly.
Reviewer : Shiva Murugesan
This paper classifies cooperative caching mechanisms and explains each class along with its advantages and disadvantages. It classifies caching as hierarchical, distributed, and clustered multicast based. A brief background is given on general caching architecture and some observations in the LAN environment. Algorithms such as Direct Client Cooperation, Greedy Forwarding, Centrally Coordinated Caching, and N-chance Forwarding are explained, although the relevance of file system caching to these algorithms is not made clear.
The Harvest project is explained as an example of hierarchical caching. It is a non-blocking, single-threaded cache, with page faults as the only source of blocking. The extension of Harvest (Squid) is briefly explained; its additions are the concept of private and public objects and weights assigned to cache peers for object resolution. Clustered multicast caching is then surveyed, giving Adaptive Web Caching, Dynamic Web Caching, and LSAM as examples.
Proxy Sharing, CRISP, and Cachemesh are shown as examples of distributed caching. Proxy Sharing differs from CRISP in the absence of a centralized mapping server: a proxy locates the object without a mapping directory and sends it to the client. The new technique in Cachemesh is that it uses cache routing to get objects from its peers. The communication protocols surveyed are the Internet Cache Protocol, Cache Digests, Summary Cache, the Cache Array Routing Protocol, the Hypertext Caching Protocol, and the Web Cache Control Protocol. A comparison is made between hierarchical and distributed caching: the advantages of distributed caching are shorter transmission times, higher bandwidth, and better load distribution, while the advantage of hierarchical caching is shorter connection times. A hybrid scheme is proposed to take advantage of both approaches.