OP: liuxf666

[Study Notes] System Architecture 2 - Components IV

Cache
    
Caching improves page load times and can reduce the load on your servers and databases.  In this model, the dispatcher first looks up whether the request has been made before and tries to return the previously computed result, saving the cost of re-executing the request.
Databases often benefit from a uniform distribution of reads and writes across their partitions.  Popular items can skew the distribution and cause bottlenecks.  Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.
Client caching
Caches can be located on the client side (OS or browser), server side, or in a distinct cache layer.
CDN caching
CDNs are considered a type of cache.
Web server caching
Reverse proxies and caches such as Varnish can serve static and dynamic content directly.  Web servers can also cache requests, returning responses without having to contact application servers.
Database caching
Your database usually includes some level of caching in a default configuration, optimized for a generic use case.  Tweaking these settings for specific usage patterns can further boost performance.
Application caching
In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage.  Since the data is held in RAM, it is much faster than typical databases where data is stored on disk.  RAM is more limited than disk, so cache invalidation algorithms such as least recently used (LRU) can help invalidate 'cold' entries and keep 'hot' data in RAM (a minimal sketch follows the Redis feature list below).
Redis has the following additional features:
  • Persistence option
  • Built-in data structures such as sorted sets and lists
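A minimal sketch of application caching (assuming a local Redis instance, the redis-py client, and a hypothetical get_user_from_db helper): reads check the cache first, fall back to the database, and store the result with a TTL so cold entries eventually expire:
import json
import redis

cache = redis.Redis(host="localhost", port=6379)  # assumed local Redis instance

def get_user_profile(user_id):
    key = "user.profile.{0}".format(user_id)
    cached = cache.get(key)                  # fast path: value held in RAM
    if cached is not None:
        return json.loads(cached)
    profile = get_user_from_db(user_id)      # hypothetical database helper
    # keep the entry hot for 10 minutes; Redis can also evict via an LRU policy
    cache.set(key, json.dumps(profile), ex=600)
    return profile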
There are multiple levels you can cache that fall into two general categories: database queries and objects:
  • Row level
  • Query-level
  • Fully-formed serializable objects
  • Fully-rendered HTML
Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.
Caching at the database query level
Whenever you query the database, hash the query as a key and store the result in the cache (a minimal sketch follows the list below).  This approach suffers from expiration issues:
  • Hard to delete a cached result with complex queries
  • If one piece of data changes such as a table cell, you need to delete all cached queries that might include the changed cell
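A rough sketch of query-level caching (using the same pseudocode-style db and cache handles as the snippets later in this post): the SQL text is hashed into a key and the result set is stored under it, which is exactly why invalidation becomes hard once any underlying row changes:
import hashlib
import json

def cached_query(sql):
    # hash the full query text to build the cache key
    key = "query." + hashlib.sha1(sql.encode("utf-8")).hexdigest()
    result = cache.get(key)       # assumed cache client (Memcached/Redis-like)
    if result is not None:
        return json.loads(result)
    result = db.query(sql)        # assumed database handle, as in the later snippets
    cache.set(key, json.dumps(result))
    return result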
Caching at the object level
See your data as an object, similar to what you do with your application code.  Have your application assemble the dataset from the database into a class instance or a data structure (a sketch follows the list below):
  • Remove the object from cache if its underlying data has changed
  • Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
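As a rough sketch of object-level caching (again assuming pseudocode-style db and cache handles, plus a hypothetical UserActivity class), the application assembles an object from several queries, caches the serialized instance, and deletes it when the underlying data changes:
import json

class UserActivity:
    def __init__(self, user, recent_posts):
        self.user = user
        self.recent_posts = recent_posts

def get_user_activity(user_id):
    key = "activity.{0}".format(user_id)
    cached = cache.get(key)
    if cached is not None:
        data = json.loads(cached)
        return UserActivity(data["user"], data["recent_posts"])
    # assemble the object from multiple queries, then cache the serialized form
    user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
    posts = db.query("SELECT * FROM posts WHERE user_id = {0} LIMIT 10", user_id)
    cache.set(key, json.dumps({"user": user, "recent_posts": posts}))
    return UserActivity(user, posts)

def on_user_data_changed(user_id):
    # remove the object from cache because its underlying data has changed
    cache.delete("activity.{0}".format(user_id))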
Suggestions of what to cache:
  • User sessions
  • Fully rendered web pages
  • Activity streams
  • User graph data
When to update the cache
Since you can only store a limited amount of data in cache, you'll need to determine which cache update strategy works best for your use case.
Cache-aside
    
The application is responsible for reading and writing from storage.  The cache does not interact with storage directly.  The application does the following:
  • Look for entry in cache, resulting in a cache miss
  • Load entry from the database
  • Add entry to cache
  • Return entry
def get_user(self, user_id):
    key = "user.{0}".format(user_id)
    user = cache.get(key)                 # 1. look for the entry in the cache
    if user is not None:
        return json.loads(user)           # cache hit: deserialize and return
    user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)  # 2. cache miss: load from the database
    if user is not None:
        cache.set(key, json.dumps(user))  # 3. add the entry to the cache
    return user                           # 4. return the entry
Memcached is generally used in this manner.
Subsequent reads of data added to cache are fast.  Cache-aside is also referred to as lazy loading.  Only requested data is cached, which avoids filling up the cache with data that isn't requested.
Disadvantage(s): cache-aside
  • Each cache miss results in three trips, which can cause a noticeable delay.
  • Data can become stale if it is updated in the database.  This issue is mitigated by setting a time-to-live (TTL) which forces an update of the cache entry, or by using write-through.
  • When a node fails, it is replaced by a new, empty node, increasing latency.
Write-through
    

The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:
  • Application adds/updates entry in cache
  • Cache synchronously writes entry to data store
  • Return
Application code:
set_user(12345, {"foo":"bar"})
Cache code:
def set_user(user_id, values):
    # 1. the cache layer synchronously writes the entry to the data store
    #    (pseudocode query; the actual columns depend on your schema)
    user = db.query("UPDATE Users SET data = {1} WHERE id = {0}", user_id, values)
    # 2. the cache keeps the freshly written entry, so cached data is never stale
    cache.set(user_id, user)
Write-through is a slow overall operation due to the write operation, but subsequent reads of just written data are fast.  Users are generally more tolerant of latency when updating data than reading data.  Data in the cache is not stale.
Disadvantage(s): write through
  • When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database.  Cache-aside in conjunction with write through can mitigate this issue.
  • Most data written might never be read, which can be minimized with a TTL.
Write-behind (write-back)
    
In write-behind, the application does the following:
  • Add/update entry in cache
  • Asynchronously write entry to the data store, improving write performance (a minimal sketch follows this list)
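A minimal write-behind sketch (assuming the same pseudocode-style cache and db handles, plus Python's standard queue and threading modules): the write is acknowledged as soon as the cache is updated and the change is enqueued, while a background worker persists entries to the data store asynchronously:
import json
import queue
import threading

write_queue = queue.Queue()

def set_user(user_id, values):
    # 1. update the cache and enqueue the write, then return immediately
    cache.set("user.{0}".format(user_id), json.dumps(values))
    write_queue.put((user_id, values))

def flush_worker():
    # 2. a background worker drains the queue and writes entries to the data store
    while True:
        user_id, values = write_queue.get()
        db.query("UPDATE Users SET data = {1} WHERE id = {0}", user_id, json.dumps(values))
        write_queue.task_done()

threading.Thread(target=flush_worker, daemon=True).start()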
Disadvantage(s): write-behind
  • There could be data loss if the cache goes down prior to its contents hitting the data store.
  • It is more complex to implement write-behind than it is to implement cache-aside or write-through.
Refresh-ahead
    
You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.
Refresh-ahead can result in reduced latency vs read-through if the cache can accurately predict which items are likely to be needed in the future.
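One way to approximate refresh-ahead (a sketch assuming a Redis-like cache handle that supports ttl() and set(..., ex=...), plus a hypothetical load_from_db helper): on each read, if the remaining time-to-live has dropped below a threshold, the entry is reloaded in the background before it expires:
import json
import threading

CACHE_TTL = 600        # seconds an entry lives in the cache
REFRESH_MARGIN = 60    # refresh when fewer than this many seconds remain

def refresh(key):
    cache.set(key, json.dumps(load_from_db(key)), ex=CACHE_TTL)  # hypothetical loader

def get_with_refresh_ahead(key):
    value = cache.get(key)
    if value is None:
        fresh = load_from_db(key)
        cache.set(key, json.dumps(fresh), ex=CACHE_TTL)
        return fresh
    if cache.ttl(key) < REFRESH_MARGIN:
        # recently accessed and about to expire: refresh it asynchronously
        threading.Thread(target=refresh, args=(key,), daemon=True).start()
    return json.loads(value)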
Disadvantage(s): refresh-ahead
  • Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.
Disadvantage(s): cache
  • Need to maintain consistency between caches and the source of truth, such as the database, through cache invalidation.
  • Cache invalidation is a difficult problem; there is additional complexity around when to update the cache.
  • Need to make application changes such as adding Redis or memcached.

Keywords: Components Component System ATCH comp

#3
充实每一天 posted on 2019-4-6 07:48:53 from mobile
Thumbs up.

#4
珍惜点滴 (student verified) posted on 2019-4-6 09:50:35
Thanks for sharing.

#5
从1万到一亿 (employment verified) posted on 2019-4-6 10:19:33

#6
hifinecon posted on 2019-4-6 10:31:43

#7
twt05 (employment verified) posted on 2019-4-6 16:10:46
Thanks for sharing, just showing my support.

#8
sulight (student verified) posted on 2019-4-6 18:45:05
Thanks for sharing.

#9
苏亮480 posted on 2019-4-6 19:22:30
Thanks for sharing.
