Articles in the Web category

Using Let's Encrypt free SSL certificates

For individuals or small companies, buying a commercial SSL certificate still costs real money. There are quite a few free SSL certificates nowadays, but many are only free for one year. I recommend Let's Encrypt: its certificates are only valid for 90 days by default, but they can be renewed automatically with a script, so in theory they can be used indefinitely, which is very convenient.

I recommend the acme.sh script for issuing and renewing SSL certificates automatically.

curl https://get.acme.sh | sh

After the download, the script installs itself into /root/.acme.sh. First, register an account:

/root/.acme.sh/acme.sh --register-account -m xxxx@163.com

vi create_ssl.sh

export DP_Id="100000"
export DP_Key="xxxxxxxx"
~/.acme.sh/acme.sh --issue --dns dns_dp -d tech126.com -d "*.tech126.com"

Adjust the parameters in the script above for whichever DNS provider you use. I use DNSPod, so these are DNSPod's API ID and key.

After running the script, the certificate files are generated under /root/.acme.sh/tech126.com.
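The nginx configuration below expects the certificate under its ssl/ directory. One way to copy it there, and to keep it updated on every renewal, is acme.sh's --install-cert; a minimal sketch, assuming nginx keeps its certificates in /etc/nginx/ssl and can be reloaded with nginx -s reload (both paths and the reload command are assumptions, adjust to your layout):

mkdir -p /etc/nginx/ssl
/root/.acme.sh/acme.sh --install-cert -d tech126.com \
    --key-file       /etc/nginx/ssl/tech126.com.key \
    --fullchain-file /etc/nginx/ssl/tech126.com.cer \
    --reloadcmd      "nginx -s reload"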

Next, edit the nginx configuration:

listen 443 ssl;
charset utf-8;

ssl_certificate ssl/tech126.com.cer;
ssl_certificate_key ssl/tech126.com.key;
ssl_session_cache shared:SSL:20m;
ssl_session_timeout 10m;
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;

Add an automatic redirect from port 80:

server {
listen 80;

server_name tech126.com www.tech126.com;

rewrite ^(.*)$ https://$host$1 permanent;
}

Restart nginx, then in the WordPress admin dashboard change the site URL to https.
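Renewal itself is handled by acme.sh: the install step above registers a daily cron job, which you can confirm with crontab -l. The entry typically looks roughly like the following (illustrative only; the exact schedule and paths depend on the installation):

crontab -l
# 0 0 * * * "/root/.acme.sh"/acme.sh --cron --home "/root/.acme.sh" > /dev/null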

[Repost] Developers Corner: Maximum concurrent connections to the same domain for browsers

Don't be too surprised if you have never heard about it, as I have seen many web developers miss this crucial point. If you want a quick figure, this table is from the book Professional Website Performance: Optimizing the Front End and the Back End by Peter Smith:

[Table: maximum parallel connections per browser]

The impact of this limit 

How will this limit affect your web page? The answer is: a lot. Unless you let the user load a static page with no images, CSS or JavaScript at all, all these resources need to queue and compete for the available connections in order to be downloaded. If you take into account that some resources depend on other resources being loaded first, it is easy to see that this limit can greatly affect page load time.

Let's analyse further how a browser loads a webpage. To illustrate, I used Chrome v34 to load one article of my blog (10 ideas to improve Eclipse IDE usability). I prefer Chrome over Firebug because its Developer Tools has the best visualization of page loading. Here is what it looks like:

[Screenshot: Chrome v34 Developer Tools network timeline]
I have already cropped the page, but you should still see a lot of requests being made. Don't be scared by the complex picture; I just want to emphasize that even a simple webpage needs many HTTP requests to load. In this case I can count 52 requests, including CSS, images, JavaScript, AJAX and HTML.

If you focus on the right side of the picture, you can notice that Chrome did a decent job of highlighting different kinds of resources with different colours and also managed to capture the timeline of the requests.

Let's see what Chrome tells us about this webpage. In the first step, Chrome loads the main page and spends a very short time parsing it. After reading the main page, Chrome sends a total of 8 parallel requests at almost the same time to load images, CSS and JavaScript. So now we know that Chrome v34 can send up to 8 concurrent requests to one domain. Still, 8 requests are not enough to load the webpage, and you can see that more requests are sent as connections become available.

If you want to dig further, you can see that two JavaScript files and one AJAX call (the 3 requests at the bottom) are only sent after one of the JavaScript files has loaded. This can be explained by the execution of JavaScript triggering more requests. To simplify the situation, I created this simple flowchart:

[Flowchart: page loading phases]
I tried my best to follow the colour convention of Chrome (green for CSS, purple for images and light blue for AJAX and HTML). Here is the loading agenda:

 

  • Load the landing page HTML
  • Load resources for the landing page
  • Execute JavaScript, triggering 2 API calls to load comments and followers.
  • Each comment and follower loaded triggers an avatar load.
  • ...
So, at minimum you have 4 phases of loading the webpage, and each phase depends on the result of the earlier phase. However, due to the limit of 8 maximum parallel requests, one phase can be split into 2 or more smaller phases, as some requests wait for an available connection. Imagine what would happen if this webpage were loaded with IE6 (2 parallel connections, or a minimum of 26 rounds of loading for 52 requests)?


Why do browsers have this limit?

You may ask: if this limit can have such a great impact on performance, why don't browsers give us a higher limit so that users can enjoy a better browsing experience? Most of the well-known browsers choose not to grant that wish, so that servers will not be overloaded by a small number of browsers and end up classifying users as DDoS attackers.

In the past, the common limit was only 2 connections. This may have been sufficient in the early days of the web, when most content was delivered in a single page load. However, it soon became a bottleneck as CSS and JavaScript grew popular. Because of this, you can notice a trend of increasing this limit in modern browsers. Some browsers even allow you to modify this value (Opera), but it is better not to set it too high unless you want to load test the server.

How to handle this limit?

This limit will not cause slowness in your website if you manage your resources well and do not hit it. When your page is first loaded, there is a first request which contains the HTML content. When the browser processes the HTML content, it spawns more requests to load resources like CSS, images and JS. It also executes JavaScript and sends Ajax requests to the server as you instruct it to.

Fortunately, static resources can be cached and only need to be downloaded the first time. If they cause slowness, it happens only on the first page load and is still tolerable. It is not rare for a user to see the page frame load first and some pictures slowly appear later. If you feel that your resources are too fragmented and consume too many requests, there are tools available that combine and compress them so the browser can load them in a single request (UglifyJS, Rhino, YUI Compressor, ...).

A lack of control over Ajax requests causes more severe problems. I would like to share some examples of poor design that cause slowness in page loading.

1. Loading page content with many Ajax requests

This approach is quite popular because it lets the user feel the progress of page loading and enjoy some important parts of the content while waiting for the rest to load. There is nothing wrong with this, but things get worse when you need more requests to load content than the browser can supply you with. Say you create 12 Ajax requests but your browser limit is 6; in the best-case scenario, you still need to load the resources in two batches. That is still not too bad if these 12 requests are not nested or consecutively executed, because then the browser can use all available connections to serve the pending requests. A worse situation happens when one request is initiated in another request's callback (nested Ajax requests). If this happens, your webpage is slowed down by your design rather than by the browser limit.

A few years ago I took over a project which was haunted by performance issues. There were many factors causing the slowness, but one concern was too many Ajax requests. I opened the browser in debug mode and found more than 6 requests being sent to the server to load different parts of the page. Moreover, it was getting worse because the project was delivered by teams on different continents, in different time zones. Features were developed in parallel, and the developer working on a feature would conveniently add a server endpoint and an Ajax request to get the work done. Worried that the situation was getting out of control, we decided to shift the direction of development. The original design was like this:

[Diagram: original architecture, one Ajax request per page fragment]

For most of the Ajax requests, the response returns a JSON model of the data, and the Knockout framework then binds the HTML controls to the models. We did not face the nested-request issue here, but the loading time could not get any faster because of the browser limit, and many HTTP threads were consumed to serve a single page load. Another problem was the lack of caching: the page contents are fairly static, with only minimal customization in some parts of the pages.

After consideration, we decided to cut the number of requests down by generating the page content in a single request. However, if you do not do it properly, it may end up like this:

[Diagram: serial execution of the service calls]


This is even worse than the original design. It is more or less equal to having a limit of 1 connection to the server, with all the requests handled one by one.

The proper way to achieve similar performance is to use async programming:

[Diagram: concurrent execution of the service calls with promises]

Each promise can be executed in a separate thread (not an HTTP thread), and the response is returned once all the promises have completed. We also applied caching to all of the services to make sure they return quickly. With the new design, the page responds faster and server capacity improved as well.
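The article shows no code for this, but the idea can be sketched in a few lines of JavaScript (the project's actual backend is not shown; loadHeader, loadComments, loadFollowers and buildHtml are hypothetical services used only for illustration):

// Fire the service calls in parallel and assemble one response
// only after all of the promises have resolved.
async function renderPage(req, res) {
  const [header, comments, followers] = await Promise.all([
    loadHeader(req.userId),      // each hypothetical service returns a Promise
    loadComments(req.postId),
    loadFollowers(req.userId),
  ]);
  res.send(buildHtml(header, comments, followers));
}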

2. Failing to manage the request queue

When you make an Ajax request in JavaScript and the browser does not have any available connection to serve it, it temporarily puts the request into a request queue. Disaster happens when developers fail to manage this queue properly. This often happens with rich client applications. A rich client application functions more like an application than a web page: clicking a button should not trigger loading a new web address; instead, the page content is updated with the results of Ajax requests. The common mistake is to let new requests be created before the existing requests in the queue have been cleared.

I have worked on a web application that makes more than 10 Ajax requests when the user changes the value of a first-level combo box. Imagine what happens if the user changes the value of the combo box 10 times consecutively without any break in between: 100 Ajax requests go into the request queue and the page seems to hang for a few minutes. This is an intermittent issue because it only happens if the user manages to create Ajax requests faster than the browser can handle them.

The solution is simple; you have two options. The first option: forget about the rich client approach and refresh the whole page to load new content. To persist the value of the combo box, store it as a hash attached to the current URL; in that case the browser clears up the queue. The second option is even simpler: block the user from changing the combo box while the queue is not yet cleared. To avoid a bad experience, you can show a loading bar while the combo box is disabled.
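A minimal sketch of the second option, assuming jQuery-style Ajax; panelUrls, showLoadingBar and hideLoadingBar are hypothetical:

let pending = 0;

function onComboChange(comboValue) {
  // Block further changes until every request triggered by this change returns.
  $('#combo').prop('disabled', true);
  showLoadingBar();

  pending = panelUrls.length;               // one Ajax request per page panel
  panelUrls.forEach(url => {
    $.get(url, { value: comboValue }).always(() => {
      if (--pending === 0) {                // last response: release the UI
        $('#combo').prop('disabled', false);
        hideLoadingBar();
      }
    });
  });
}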

3. Nesting of Ajax requests

I have never seen a business requirement for nesting Ajax requests. Most of the time I saw nested requests, it was a design mistake. For example, suppose you are a lazy developer and you need to load flags for every country in the world, sorted by continent. Disaster happens when you decide to write the code this way:

  • Load the continent list
  • For each continent, load its countries
Assuming the world has 5 continents, you spawn 1 + 5 = 6 requests. This is not necessary, as you can return one complex data structure that contains all of this information. Making requests is expensive, making nested requests is very expensive; using the Facade pattern to get what you want in a single call is the way to go.
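A rough JavaScript illustration of the two approaches; the endpoints and renderFlags are made up for the example:

// Nested version: 1 request for the continents + 1 request per continent.
$.getJSON('/continents', continents => {
  continents.forEach(c => {
    $.getJSON('/countries?continent=' + c.id, countries => renderFlags(c, countries));
  });
});

// Facade version: a single endpoint returns the whole structure in one call.
$.getJSON('/flags-by-continent', data => {
  data.forEach(c => renderFlags(c, c.countries));
});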

Drupal's CKEditor plugin

To make it easier for the product team to organize and publish documentation, we set up a Drupal site ourselves

and installed the CKEditor plugin to make editing static pages more convenient.

We then found a problem with CKEditor: it automatically removes empty tags, such as an empty span.

Many articles suggest that setting fillEmptyBlocks solves this, but it did not work for us.

In the end we added a dedicated protectedSource regex that keeps empty span tags out of CKEditor's processing.

Edit the modules/ckeditor/ckeditor.config.js configuration file and add:

config.protectedSource.push(/<span.*?><\/span>/gi); 

Clear the cache and try again, and it works.

 

In addition, if your pages use their own CSS and you want to see its effect inside CKEditor,

you can configure the plugin and set the CSS FILE PATH option accordingly.
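The same effect can also be achieved directly in ckeditor.config.js through contentsCss (the stylesheet path below is only an example):

// Load the site's own stylesheet inside the editing area
config.contentsCss = '/sites/default/files/custom.css';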

 

Keep-alive in the HTTP protocol

When we talk about HTTP "long connections", there are actually two different meanings:

  1. Comet: the server and the browser keep an HTTP connection open for a long time so the server can push messages in real time (see the post on Comet in web applications). In this mode the browser sends a single request, and the server keeps streaming messages to the browser until the connection times out or is terminated manually.
  2. keep-alive: a mechanism defined in the HTTP protocol that reuses one TCP connection for multiple HTTP requests and responses, saving the cost of the TCP three-way handshake and therefore reducing the latency of subsequent requests.

HTTP/1.0
In HTTP/1.0 there is no official standard for how Keep-Alive should work; it was effectively bolted onto the protocol. If the client browser supports Keep-Alive, it adds the header Connection: Keep-Alive to the request; when the server receives a request carrying Connection: Keep-Alive, it adds the same header to the response. The HTTP connection between client and server is then kept open rather than torn down (unless the Keep-Alive timeout is exceeded, the power fails, and so on), and when the client sends the next request it reuses the established connection.

HTTP/1.1
In HTTP/1.1, all connections are kept alive by default unless the request or response headers explicitly ask for the connection to be closed with Connection: Close. So in 1.1 the Connection: Keep-Alive header no longer carries much meaning.


Apache, Nginx and the other common servers all support keep-alive. In nginx it is configured with:

keepalive_timeout  65;

To disable keep-alive, simply set keepalive_timeout to 0.
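For reference, nginx can also cap how many requests are served over a single connection with keepalive_requests; a minimal sketch (65 comes from the post, 1000 is just an example value):

keepalive_timeout  65;     # close an idle connection after 65 seconds
keepalive_requests 1000;   # recycle the connection after this many requests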

Containers such as Resin and Tomcat support keep-alive too, and the URL class in the JDK also uses keep-alive by default.

 

To really understand the keep-alive mode, capture packets with wireshark or tcpdump.

For a server without keep-alive, the response carries Connection: Close and the TCP connection is closed immediately afterwards, as shown in the capture below.

With keep-alive enabled, multiple HTTP requests are sent over one TCP connection,

and the TCP connection is only closed after the keep-alive timeout; in the capture below you can see it is closed only after 16 seconds.

 

Also recommended: an article on HTTP persistent connections (HTTP长连接).

HTTP Strict Transport Security

HTTP Strict Transport Security is still a draft at the moment.

The details are at http://tools.ietf.org/html/draft-hodges-strict-transport-sec-02

The Wikipedia page is http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security

Chrome's write-up is at http://dev.chromium.org/sts

HSTS essentially defines a response header; once the browser has seen this header,

it turns subsequent http requests into https requests by itself, with no work needed in the backend application.

Unfortunately only Firefox and Chrome support it for now; hopefully more browsers will follow.

The header to add looks like this:

Strict-Transport-Security: max-age=16070400; includeSubDomains

where max-age specifies for how long the browser should automatically apply HSTS to this domain,

and includeSubDomains means that subdomains are covered as well.
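If TLS terminates at nginx, the header can simply be added in the server block that listens on 443; a minimal sketch using the same max-age as the example above:

add_header Strict-Transport-Security "max-age=16070400; includeSubDomains";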

In a packet capture you can see that the first http request is aborted by the browser, which then sends a new https request instead.

Within the max-age window, visiting http://passport.sohu.com makes the browser jump to https://passport.sohu.com automatically.

A few useful Chrome extensions

  1. Webpage Screenshot: https://chrome.google.com/webstore/detail/cpngackimfmofbokmjmljamhdncknpmg
  2. DNS Refresh
  3. Quick Tools: https://chrome.google.com/webstore/detail/fjccknnhdnkbanjilpjddjhmkghmachn?hl=zh-CN
  4. Proxy SwitchySharp: https://chrome.google.com/webstore/detail/dpplabbmogkhghncfbfdeeokoefdjegm
  5. FireBug Lite: https://chrome.google.com/webstore/detail/bmagokdooijbeehmkpknfglimnifench?hl=zh-CN
  6. Axure: https://chrome.google.com/webstore/detail/axure-rp-extension-for-ch/dogkpdfcklifaemcdfbildhcofnopogp?hl=zh-CN
  7. Reopen recently closed tabs: https://chrome.google.com/webstore/detail/%E6%81%A2%E5%A4%8D%E6%9C%80%E8%BF%91%E5%85%B3%E9%97%AD%E7%9A%84%E6%A0%87%E7%AD%BE/jeimjmcpmonhlamlphijjiemdkepnnog?hl=zh-CN

Building a highly available load balancer with Nginx + KeepAlived

Using Nginx and KeepAlived for layer-7 load balancing with HA is a viable replacement for a hardware load balancer such as F5.

Alternatively, LVS + KeepAlived can be used for layer-4 forwarding to achieve load balancing.

For simple layer-7 forwarding, Nginx is generally good enough.

I will skip the installation of Nginx and KeepAlived and only describe the environment and configuration.

Environment:

2 servers, each with 2 NICs; eth0 is connected to the internal network with IP 10.x.x.x/31,

eth1 is connected to the public network but has no static public IP assigned; KeepAlived dynamically binds the public VIP x.x.x.x to it.

DNS resolves the domain name to the VIP.

Configuration:

vi  /etc/keepalived/keepalived.conf

 

global_defs {
    notification_email {
        xxx@sohu.com
    }
    notification_email_from xxxx@sohu.com
    smtp_server 192.168.x.x
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script check_nginx {
    script "/opt/scripts/keepalived/check_nginx.sh"
    interval 1
}

vrrp_instance TanChuang_1 {
    state BACKUP
    interface eth0
    virtual_router_id 77
    priority 80
    advert_int 1
    smtp_alert
    authentication {
        auth_type PASS
        auth_pass xxxx
    }
    track_script {
        check_nginx
    }
    virtual_ipaddress {
        x.x.x.x/24 dev eth1
    }
    virtual_routes {
        via x.x.x.x dev eth1
    }
    notify /opt/scripts/keepalived/send_msg.sh
}

A few things worth explaining:

1. state BACKUP is only the initial state; which of the two servers ends up as MASTER is decided by priority.

2. interface eth0 is the NIC over which the KeepAlived instances send their heartbeat packets. This interface must have a fixed IP address, otherwise the two servers cannot communicate.

In our environment eth1 has no fixed IP, so only eth0 can be used here; otherwise KeepAlived reports the error:

cant do IP_ADD_MEMBERSHIP errno=No such device

3. virtual_router_id is the VRID; it must be identical on both servers.

4. track_script points to a custom script that checks the health of the service. By default, the IP only fails over when the server itself or KeepAlived goes down.

We wrote a script that checks whether the nginx process is still alive; if it is not, the script kills KeepAlived to force a state transition (a sketch follows below).
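The post does not include the script itself; a minimal sketch of what /opt/scripts/keepalived/check_nginx.sh might look like (an assumption, not the original script):

#!/bin/bash
# If no nginx process is left, stop keepalived so the VIP fails over to the peer.
if ! pgrep -x nginx > /dev/null; then
    killall keepalived
fi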

5. virtual_routes defines the routes that are added and removed on a state transition. Because eth1 has no fixed public IP, the default route to the public network has to be added dynamically when the state switches.

6. notify is the script that sends an alert (e.g. an SMS) when a transition happens. KeepAlived automatically passes it 3 arguments: $1 (GROUP or INSTANCE), $2 (the group or instance name) and $3 (MASTER, BACKUP or FAULT).
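The notify script is not shown in the post either; a hypothetical /opt/scripts/keepalived/send_msg.sh could simply log the transition and hand it to whatever alerting channel is available:

#!/bin/bash
# $1: GROUP|INSTANCE, $2: group/instance name, $3: MASTER|BACKUP|FAULT
echo "$(date) keepalived $1 $2 -> $3" >> /var/log/keepalived_notify.log
# send_sms "keepalived: $2 is now $3"   # hypothetical alerting hook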

In our configuration only one public IP is used: it serves as the VIP, is what the DNS record points to, and is what external traffic hits.

 

Nginx + KeepAlived high availability can actually be set up in several modes:

1. Two servers in active/standby: normally only one server handles traffic, and when it fails the other takes over automatically. The configuration above uses this mode.

2. Two servers acting as master and backup for each other: configure 2 instances with 2 VIPs and use DNS round robin. Normally both servers serve traffic; when one fails, the other handles all requests. Under very high concurrency, however, losing one server dumps all traffic onto the other, which may well bring it down too.

3. Three servers, two of which serve traffic while each is backed up by the third. When one server fails there are still two servers serving traffic, which makes the system more stable.

A few takeaways from the Velocity conference

Over the past two days I attended the Velocity Web Performance and Operations conference and listened to quite a few talks.

Some of them were very good and I felt I learned a lot, for example the ones from Facebook, Yahoo, Taobao...

Quite a few others taught me basically nothing: purely theoretical or framework-level material with no connection to real applications, which left no impression.

Overall: the conference is good and well worth attending.

Below are the points I summarized that may be useful to us:

1. LVS + KeepAlived for load balancing, replacing the functionality of F5. The DBAs are reportedly experimenting with KeepAlived for automatic database failover.

2. Cache: not only server-side caching (making good use of the large memory in new machines); the client side can also cache more to improve response time. Taobao said some of their backend caches reach a 98% hit rate.

3. Use YSlow and PageSpeed for front-end performance tuning, and use webpagetest.org to test your own pages.

4. BigPipe: worth trying Facebook's technique of splitting a large page into pagelets that are output and rendered in parallel.

5. Combine HTTP requests and reduce the request count as much as possible; Mail has a lot to improve here, e.g. udisk, mobile, passport, vote...

6. Front-end JS bundling and splitting strategy: fewer bundles is not automatically better; decide how to combine JS based on what each script is for and on log analysis, so the user gets to the welcome page as quickly as possible...

7. Use non-blocking JS as much as possible so page elements load in parallel (the page's waterfall chart can guide the improvements) and partial content is shown to the user as early as possible.

8. Strictly control the number of cookies and keep them small; server-side sessions can replace some cookies. We could perhaps borrow Facebook's approach and write a small program that automatically removes unrecognized cookies...

9. Prefetching: not every user needs prefetching at every time of day; decide based on user activity, server load, the user's connection (e.g. a 3G data card), and so on. In Mail, does every user need the second page of the list preloaded? Perhaps only when the first page still has unread mail that did not fit...

10. Front-end optimization is a long-term effort; it needs persistence and continuous improvement...

Finally, one more very important point: innovation.

Take Quickling, BigPipe and the other ideas proposed by Facebook:

after hearing the talks they turn out to be quite simple,

but the key point is that they combined existing techniques effectively,

creatively proposed the ideas, implemented them, and in the end improved web performance. That is innovation...

A few notes on ActionScript

Today Sohu Mail launched a Flash progress bar and Flash attachment upload.

Working with the front-end engineers, the project was bumpy, but it finally went live.

Here are a few things I learned about Flash/ActionScript (possibly not entirely correct....):

  1. URLLoader: used in AS to load the corresponding JS files. Although the request is issued from Flash, Flash apparently still relies on the browser to send the HTTP request: the UA and everything else are identical to the browser's, cookies are carried to the backend, and such requests can be observed in HttpWatch.
  2. FileReference: used in AS to upload attachments. At first I assumed it POSTed a raw binary stream to the backend, but it turns out the data it posts is the same as an ordinary file upload, i.e. multipart/form-data. The upload itself is performed by Flash, so it cannot be observed in HttpWatch, but Fiddler can see it; the UA is "shockwave flash".
  3. Authentication during upload: FileReference is said to be buggy. In IE it sends all cookies to the server, but in Firefox, Opera, Safari and the like it does not carry the browser's own cookies; instead it sends IE's persistent cookies (not the session cookies). This behaviour is truly bizarre. We originally wanted to read the cookie with JS and pass it to Flash for backend verification, but one of the Passport cookies is HttpOnly, so unfortunately we had to build a new authentication mechanism.....
  4. Using AS to download JS and drive the loading progress: we did this to make the progress bar smoother and more convincing. The current approach is to download the JS in AS, then call back into JS and, relying on the browser cache, dynamically create a script tag (see the sketch after this list). The problem: if the browser cache is disabled, the same JS file ends up being loaded twice, and we have not found a good solution yet....
  5. Only Flash 8.5 and later supports AS3, so the Flash version must be checked strictly.
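A rough sketch of the script-tag trick mentioned in point 4 (the callback name and URL are made up; the idea is that the browser serves the already-downloaded file from its cache):

// Called back from Flash once the JS file has been fetched;
// inserting the tag makes the browser load it again, ideally from cache.
function onJsLoaded(url) {
    var script = document.createElement('script');
    script.type = 'text/javascript';
    script.src = url;                       // e.g. '/js/mail_main.js' (example)
    document.getElementsByTagName('head')[0].appendChild(script);
}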

Cache-related headers on the Web

1. Headers that control page caching:



#Request headers:
#HTTP/1.0: do not cache the page
Pragma: no-cache
#HTTP/1.1: do not cache; the request gets a fresh 200 response
Cache-Control: no-cache

#Response headers:
#relative expiry time: the page expires after 20s
Cache-Control: max-age=20
#absolute expiry time, equivalent to max-age; if both are present, Cache-Control's max-age wins
Expires: Mon, 02 Aug 2010 05:15:16 GMT
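For reference, on the nginx side the response headers above can be produced with the expires directive, which emits both an Expires header and a matching Cache-Control: max-age (the location below is just an example):

location /test/ {
    expires 20s;
}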




2. Headers used to validate a cached page:



#Response headers:
#last modification time of the page
Last-Modified: Wed, 28 Jul 2010 09:46:14 GMT
#the page's ETag; Last-Modified is only accurate to the second, so if a file is modified several times within one second the check may be inaccurate, which is why the ETag check was added
Etag: "a21619-7e76-4b03666f"

#Request headers:
#check whether the file has been modified
If-Modified-Since: Wed, 18 Nov 2009 03:13:51 GMT
#check the corresponding ETag
If-None-Match: "eae2fb-7e76-4b03666f"




3. The cache check flow for a page:



  • When the client requests the page for the first time, the server returns 200 OK together with the following headers, indicating that the file expires after 20s:

    00:00:04.494 ! 0.009 924 1550 GET 200 gif http://mail.sohu.com/test/test.gif

    #Response headers
    Etag: "a21619-7e76-4b03666f"
    Last-Modified: Wed, 28 Jul 2010 09:46:14 GMT
    Expires: Mon, 02 Aug 2010 05:15:16 GMT
    Cache-Control: max-age=20




  • If the user presses Enter in the address bar before the cache has expired, the client reads the file directly from its cache and sends no request to the server. If the cache has already expired, the request sent is the same as for F5:

    00:01:07.468 ! 0.002 0 0 GET (Cache) gif http://mail.sohu.com/test/test.gif




  • When the user presses F5, the client sends If-Modified-Since and the ETag to the server. The server checks whether the page's Last-Modified and ETag have changed; if not, it returns 304 and sets a new Expires time. If the file has changed, it returns 200:

    00:00:00.000 ! 0.012 949 220 GET 304 gif http://mail.sohu.com/test/test.gif

    #Request header:
    If-Modified-Since: Wed, 28 Jul 2010 09:46:14 GMT
    If-None-Match: "a2161c-7fbb-4b26ed49"

    #Response header:
    Expires: Mon, 02 Aug 2010 06:46:40 GMT
    Cache-Control: max-age=20
    Etag: "a2161c-7fbb-4b26ed49"




  • When the user presses Ctrl+F5, the client adds a no-cache header to the request and the server returns 200 directly:

    00:02:13.149 ! 0.008 924 1550 GET 200 gif http://mail.sohu.com/test/test.gif

    #Request header:
    Cache-Control: no-cache

    #Response header:
    Last-Modified: Wed, 28 Jul 2010 09:46:14 GMT
    Expires: Mon, 02 Aug 2010 06:48:53 GMT
    Cache-Control: max-age=20




  • If the user clears the local cache and requests the page again, it behaves like the first request and the server returns 200.
