I am using WordPress and working on a dedicated server from Namecheap, and only one site is running on this server. Even so, I'm getting waterfall times in the range of 500ms, but I want to get it down to around 100ms. This is my website (http://ucbrowserdownload/) and the waterfall. You can see that everything looks fine on my end, but I still haven't found a solution. You can also check http://labnol/ — this website is also on WordPress and uses the same theme. Even though I'm loading far fewer images and posts on my index page, I'm still seeing a huge waterfall. I want to know how to solve this and where the problem lies: in WordPress, in the theme, or in the host. I've been completely stuck with no solution for the last few weeks. Your help will be highly appreciated. Thank you.
asked May 30, 2016 at 11:08 by Gnziet
- I don't know why people give negative ratings; in my view, they don't know the answer. – Gnziet Commented May 30, 2016 at 11:12
- Sometimes, the google pagespeed insights tool provides some good hints. Pay attention to the order of loading of resources (some resources block rendering until loaded). – YvesLeBorg Commented May 30, 2016 at 11:22
- 1 That is basically correct. They can't know the answer, cause there are about 12831239 reasons for your website to be slow. All we can see is that your server is slow and needs about 400ms just to generate your 4kb main page. You should try general performance tips, check out Google PageSpeed Insights, Apache performance options and WordPress optimization (and probably many more, see Google). – Solarflare Commented May 30, 2016 at 11:36
1 Answer
Optimization of Nginx
An optimal Nginx configuration is presented in this article. Let's briefly go through the already familiar parameters and add some new ones that directly affect TTFB.
Workers and connections
First we need to define the number of Nginx worker processes with worker_processes. Each worker process can handle many connections and is bound to the physical CPU cores. If you know exactly how many cores your server has, you can specify the number yourself, or let Nginx decide:
worker_processes auto; # Let Nginx determine the number of worker processes
In addition, you must specify the number of connections:
worker_connections 1024; # Number of connections per worker process, typically between 1024 and 4096
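As a rough illustration (the core count and numbers below are hypothetical), the two directives live in different configuration contexts and together bound how many simultaneous connections the server can hold:

worker_processes 4;          # main context: e.g. a hypothetical 4-core server
events {
    worker_connections 1024; # events context: per-worker limit
}
# Upper bound: 4 workers x 1024 = 4096 simultaneous connections
# (proxied/upstream connections also count toward this limit)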
Requests
For the web server to process the maximum number of requests, you need to use the multi_accept directive, which is off by default:
multi_accept on; # Worker processes will accept all new connections at once
Note that this feature is only useful when there is a large number of simultaneous requests. If there aren't that many requests, it makes sense to optimize the worker processes so they don't work in vain:
accept_mutex on; # Worker processes will accept new connections in turn
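As a sketch, both directives belong in the events block; which one helps depends on your traffic profile:

events {
    worker_connections 1024;
    multi_accept on;   # busy servers: each worker drains the accept queue at once
    accept_mutex on;   # quieter servers: workers take turns accepting connections
}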
Improving TTFB and server response time also depends on the tcp_nodelay and tcp_nopush directives:
tcp_nodelay on; tcp_nopush on; # Activate the tcp_nodelay and tcp_nopush directives
Without going into too much detail, these two options disable certain TCP behaviors that were relevant in the 90s, when the Internet was just gaining momentum, but make little sense today. The first directive sends data as soon as it is available (bypassing the Nagle algorithm). The second sends the response headers (the web page) and the beginning of the file in one packet, waiting until the packet is full (i.e., it enables TCP_CORK), so the browser can start rendering the web page sooner.
At first glance, the two options seem contradictory. That's why tcp_nopush should be used together with sendfile. In this case packets are filled before being sent, because sendfile is much faster and more efficient than the read + write approach. Once a packet is full, Nginx automatically disables tcp_nopush, and tcp_nodelay forces the socket to send the data. Enabling sendfile is very simple:
sendfile on; # Enable a more efficient file-sending method than read + write
The combination of all three directives reduces network load and speeds up file delivery.
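A minimal sketch of how the three directives fit together in the http context (assuming the rest of the configuration is already in place):

http {
    sendfile    on;   # send files straight from the kernel instead of read + write
    tcp_nopush  on;   # only takes effect together with sendfile: fill packets before sending
    tcp_nodelay on;   # flush the final, partially filled packet without delay
}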
Buffers
Another important optimization concerns buffer sizes: if they are too small, Nginx will frequently hit the disk; if they are too big, RAM fills up quickly. Four directives need to be configured. client_body_buffer_size and client_header_buffer_size set the buffer sizes for reading the client request body and header, respectively. client_max_body_size sets the maximum size of a client request, and large_client_header_buffers specifies the maximum number and size of buffers for reading large request headers.
The optimal buffer settings will look like this:
client_body_buffer_size 10k; client_header_buffer_size 1k; client_max_body_size 8m; large_client_header_buffers 2 1k; # 10k buffer for the request body, 1k for the header, 8MB maximum request size, and 2 x 1k buffers for large headers
Timeouts and keepalive
Proper configuration of timeouts and keepalive can also significantly improve server responsiveness.
The client_body_timeout and client_header_timeout directives set how long to wait while reading the request body and header:
client_body_timeout 10; client_header_timeout 10; # Set the waiting time in seconds
If the client stops responding, reset_timedout_connection tells Nginx to drop such connections:
reset_timedout_connection on; # Close timed-out connections
The keepalive_timeout directive sets how long to keep an idle connection open, and keepalive_requests limits the number of keepalive requests from a single client:
keepalive_timeout 30; keepalive_requests 100; # 30-second keepalive timeout, at most 100 requests per connection
Finally, send_timeout sets how long to wait between two write operations when transmitting the response:
send_timeout 2; # Nginx will wait 2 seconds between write operations before closing the connection
Caching
Enabling caching significantly improves server response time. Caching methods are covered in more detail in the material about caching with Nginx; what matters here is enabling cache-control. Nginx can tell the client to cache rarely changing data that is used frequently on the client side. To do this, add a line to the server section:
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ { expires 365d; } # Target file extensions and cache duration
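In context, the rule from above might look like this inside a server block (the extension list and lifetime are only examples; skipping access logging for static assets is an optional extra, not part of the original rule):

server {
    location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
        expires 365d;      # sets the Expires and Cache-Control: max-age headers
        access_log off;    # optional: don't log requests for static assets
    }
}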
It also doesn't hurt to cache information about commonly used files:
open_file_cache max=10000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; # Cache metadata for up to 10,000 files, re-validated every 30 seconds
open_file_cache specifies the maximum number of files whose metadata is stored and for how long. open_file_cache_valid sets how often to re-check that the cached information is still valid, open_file_cache_min_uses specifies the minimum number of client accesses before a file is cached, and open_file_cache_errors enables caching of file lookup errors.
Logging
This is another feature that can noticeably hurt the performance of the whole server and, consequently, the response time and TTFB. So the best solution is to disable the main access log and record only critical errors:
access_log off; error_log /var/log/nginx/error.log crit; # Turn off the main access log, keep only critical errors
Gzip compression
The usefulness of Gzip is difficult to overstate. Compression significantly reduces traffic and relieves the channel. But it has a downside: compression takes time, so turning it off would improve TTFB and server response time. At this stage, however, we cannot recommend disabling Gzip, because compression improves the Time To Last Byte, i.e. the time required to load the full page, and in most cases that is the more important metric. TTFB and server response time will also benefit greatly from the large-scale rollout of HTTP/2, which includes built-in header compression and multiplexing, so in the future disabling Gzip may not be as noticeable as it is now.
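If you do keep Gzip enabled, a moderate configuration keeps the CPU cost low. A minimal sketch (the compression level and type list are only examples):

gzip on;
gzip_comp_level 5;                 # middle ground between compression ratio and CPU time
gzip_min_length 256;               # don't bother compressing tiny responses
gzip_types text/css application/javascript application/json image/svg+xml;
# text/html is always compressed once gzip is on, so it is not listed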
PHP Optimization: FastCGI in Nginx
All modern sites rely on server-side technology such as PHP, which is also important to optimize. Normally PHP opens a file, checks and compiles the code, then executes it. With the OPcache module, PHP can cache the compiled result for rarely changing files. And Nginx, connected to PHP via the FastCGI module, can store the output of a PHP script and send it to the user instantly.
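A minimal sketch of an Nginx FastCGI cache for PHP responses (the zone name, cache path and PHP-FPM socket are assumptions; a real WordPress setup would also add fastcgi_cache_bypass rules for logged-in users and POST requests):

http {
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/run/php/php-fpm.sock;  # adjust to your PHP-FPM socket
            fastcgi_cache WORDPRESS;                  # use the zone defined above
            fastcgi_cache_valid 200 60m;              # cache successful responses for an hour
        }
    }
}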
The main takeaway
Resource optimization and correct web server settings are the main factors influencing TTFB and server response time. Also don't forget about regular software updates to stable releases, which bring optimizations and performance improvements.