Demystifying Cloud Latency

In the days before the ubiquitous Internet, understanding latency was relatively straightforward: you simply counted the number of router hops between you and your application. Network latency was essentially the sum of the delays that data packets experienced while travelling from the source, through those hops, to your application.

Large enterprises had this largely under their control. You would own most, if not all, of the routers. There were network delays, but they were measurable and predictable, so you could improve on them while setting expectations.

The internet changed this. In a shared, off-premise infrastructure, calculating network latency is now complex. The subtleties, especially those involving the cloud service provider's infrastructure and your link to the data center, play a huge role, and they can impact latency in ways we do not readily appreciate. At the same time, managing latency is becoming crucial. As more users live and breathe technology, they take fast connectivity as a given. With consumers enjoying easy access to high-speed broadband, wired or wireless, they expect enterprise networks to perform in the same vein.

Cloud has made the subject even more pressing. As enterprises look to benefit from public shared infrastructure for cost-efficiency, scalability, and agility, they are shifting their in-house, server-oriented IT infrastructure to a network-oriented one that is often managed and hosted by a service provider. With the rise of machine-to-machine decision making, automation, cognitive computing, and high-speed businesses like high-frequency trading, network latency is in the spotlight, with adoption, reputation, revenues, and customer satisfaction now tied to it.

As applications become latency sensitive, with end users showing near-zero tolerance for lag and delays, network latency now shapes application development as well.

