Demystifying Cloud Latency

In the days before the ubiquitous Internet, understanding latency was relatively straightforward: you simply counted the number of router hops between you and your application. Network latency was essentially the sum of the delays that data packets experienced travelling from the source, through each hop, to your application.
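Under that hop-counting model, end-to-end latency is little more than the sum of per-hop delays. A minimal sketch in Python (the function name and delay figures are illustrative, not from the article):

```python
# Hop-counting model of latency: total delay is roughly the sum of the
# per-hop delays along the path (propagation folded into each hop here).

def end_to_end_latency_ms(per_hop_delays_ms):
    """Approximate one-way latency as the sum of per-hop delays."""
    return sum(per_hop_delays_ms)

# Hypothetical on-premise path: four routers between client and application.
hops_ms = [1, 2, 2, 1]  # illustrative per-hop delays in milliseconds
print(end_to_end_latency_ms(hops_ms))  # prints 6
```

Because an enterprise owned the routers, each term in that sum could be measured and improved individually; that is exactly what shared cloud infrastructure takes away.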

Large enterprises had this largely under their control: they owned most, if not all, of the routers. There were network delays, but these were measurable and predictable, allowing you to improve on them while setting expectations. The Internet changed this. In a shared, off-premise infrastructure, calculating network latency is now complex. The subtleties, especially those involving the cloud service provider's infrastructure and your link to the data center, play a huge role, and they can affect latency in ways we do not readily appreciate. At the same time, managing latency is becoming crucial. Users who live and breathe technology take fast connectivity as a given; with easy consumer access to high-speed wired and wireless broadband, they expect enterprise networks to perform just as well.

The cloud has made the subject even more pressing. As enterprises look to benefit from public shared infrastructures for cost-efficiency, scalability and agility, they are shifting their in-house, server-oriented IT infrastructure to a network-oriented one that is often managed and hosted by a service provider. With the rise of machine-to-machine decision making, automation, cognitive computing and high-speed businesses such as high-frequency trading, network latency is in the spotlight, with adoption, reputation, revenue and customer satisfaction now tied to it.

As applications become latency sensitive, with end users showing near-zero tolerance for lag and delays, application development itself is also shaped by network latency.
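One practical consequence is that latency-sensitive applications measure and budget network time explicitly rather than assuming it away. A minimal sketch using Python's `time.perf_counter` (the helper name and the 50 ms budget are hypothetical, for illustration only):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_ms) so callers can track latency."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

# Hypothetical latency budget: flag any call slower than 50 ms.
LATENCY_BUDGET_MS = 50

result, elapsed = timed_call(sum, range(1000))
if elapsed > LATENCY_BUDGET_MS:
    print(f"over budget: {elapsed:.2f} ms")
print(result)  # prints 499500
```

In a real service the wrapped call would be a network request, and the measurements would feed dashboards or alerts; the pattern of attaching a latency figure to every call is the same.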

Note: If you want to learn more, please download the attachment.
