Let's Architect: design approach and PoC implementation of a cloud-native serverless architecture for a real-time analytics project


1. Foreword – cloud native and the multi-cloud / hybrid-cloud deployment architecture

Hi everyone, this is MingGe!

Against the backdrop of digital transformation, more and more enterprises keep moving more of their applications to the cloud, and application architectures are increasingly leaning towards cloud native, in order to support multi-cloud and hybrid-cloud deployments.

A while ago, I took part in the architecture design and POC development of a real-time analytics project on AWS. The project adopted a serverless, cloud-native architecture, and in this article I'd like to share the details of the architecture design and the POC code. I hope you enjoy it.

2. Project background and goals

The overall background and goals of the project are as follows:

[Figure: original statement of the project background and goals]


Distilled and summarized, the project's background, core goals and extra goals are:

  • Background: Ingest, transform and prepare the netCDF data provided by the UK Met Office, and make it available for secure querying by our customer as soon as it arrives in the S3 bucket.
  • Core goals:
    • High availability (no downtime)
    • Quick response
    • Timely availability of new data
  • Extra goals:
    • Security
    • Cost effectiveness

3. Architecture overview

The complete architecture diagram of the final design is shown below:

[Figure: overall architecture diagram]


4. Architecture details and thought process

4.1 How to discover newly available data ASAP? – SQS

  • The UK Met Office prepares the original data in netCDF format and uploads it to an S3 bucket, but listing a bucket is both expensive and slow (file system vs object store), so we can't take that approach to quickly discover newly available data in S3;
  • We noticed that the UK Met Office also sends a message to an SNS topic once new data is available in the S3 bucket, so we can subscribe an SQS queue to the SNS topic and get notified when new objects are created; this solution is latency-efficient, cost-effective and scalable (see the sketch below);
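
To make the wiring concrete, below is a minimal boto3 sketch of subscribing our own SQS queue to the Met Office SNS topic. The topic ARN, queue name and region are placeholders, not the project's real values.

```python
# Minimal sketch (boto3): subscribe our SQS queue to the Met Office SNS topic.
import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

TOPIC_ARN = "arn:aws:sns:eu-west-2:123456789012:met-office-new-data"  # hypothetical

# 1. Create (or reuse) our queue; a standard queue is enough, no FIFO needed.
queue_url = sqs.create_queue(QueueName="netcdf-new-data-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# 2. Allow the SNS topic to deliver messages into the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})

# 3. Subscribe the queue to the topic; raw delivery keeps the original S3 event body.
sns.subscribe(TopicArn=TOPIC_ARN, Protocol="sqs", Endpoint=queue_arn,
              Attributes={"RawMessageDelivery": "true"})
```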

4.2 Can we use the original S3 bucket used by the UK Met Office?

  • We noticed that the original data is held in that bucket for only 7 days after the notification is sent, after which it is deleted;
  • We can use our own S3 bucket to store the data, which gives us full control over it, including the data lifecycle, the data security policies, etc;

4.3 How to serve our end users with quick response and high availability? – API gateway + DynamoDB

  • Our end users typically ask questions like "what will the weather/humidity/temperature be like in city C1 at time T1? What about city C2 and C3? What about time T2?". To answer such a question, we first have to figure out which files in the S3 bucket contain forecast results for that specific time (every file contains forecast results for all UK cities, so location is not a problem);
  • So we can use RDS or DynamoDB to store the metadata "which S3 file contains the forecast results for which time": when we receive a question from a customer, we first query RDS/DynamoDB to find the corresponding S3 file, then query that S3 file to get all the forecast details (weather/humidity/temperature etc.) for all UK cities; this two-step lookup is sketched after this list;
  • RDS is a relational database, typically used for well-formed structured data, while DynamoDB is a fully managed key-value NoSQL data store. Both can fulfil our functional requirements, but since our data is not highly structured and DynamoDB shines in availability, scalability and performance, we will go with DynamoDB;
  • We can use an API gateway as a proxy to DynamoDB and answer the end user's request directly, without an extra Lambda layer between API gateway and DynamoDB; the whole data pipeline is shorter, which makes it more time-effective, more cost-effective and less error-prone. API gateway also provides many security mechanisms, including authentication, authorization, auditing and encryption;
  • With API gateway, DynamoDB and S3, the whole serving layer responds quickly and with high availability, and is also cost-effective and secure;
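
A rough sketch of the two-step lookup described above, assuming hypothetical table, index and attribute names (forecast_metadata, forecast_time-index, s3_key) rather than the project's real ones:

```python
# Sketch of the serving-layer lookup: metadata first, then the netCDF object.
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

metadata_table = dynamodb.Table("forecast_metadata")        # hypothetical table name

def lookup_forecast_files(forecast_time: str) -> list:
    """Step 1: find which S3 objects hold forecasts for the requested time."""
    resp = metadata_table.query(
        IndexName="forecast_time-index",                     # hypothetical GSI name
        KeyConditionExpression=Key("forecast_time").eq(forecast_time),
    )
    return [item["s3_key"] for item in resp["Items"]]        # hypothetical attribute

def fetch_forecast(bucket: str, key: str) -> bytes:
    """Step 2: pull the netCDF object itself for the per-city details."""
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```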

4.4 How to ingest, transform and prepare the original data - Lambda!

  • To consume messages from SQS queues we would normally follow an event-driven architecture and use a stream-processing framework like Spark Streaming/Flink/Kafka Streams, but to use them you first need to provision EC2 servers (and possibly ECS/EKS), and you have to deploy, monitor and scale (both up and down) your app all by yourself, which is cumbersome and not cost-effective;
  • You could consider serverless Fargate, but then you have to handle the event-driven wiring yourself;
  • Lambda is both serverless and event-driven: it automatically scales with your data volume, it integrates well with other AWS services such as SQS, S3, DynamoDB and API gateway, and it lets you pay only for what you use, so it is a perfect match for our case!
  • We can create an SQS trigger for the Lambda function, so that as soon as events arrive in SQS they trigger the execution of the Lambda, where we do the transform and load into the downstream DynamoDB table (see the trigger sketch after this list);
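
Wiring up the trigger is a one-off configuration step; a boto3 sketch is shown below (function name, queue ARN and batch size are illustrative assumptions):

```python
# Sketch: attach the SQS queue as an event source (trigger) for the Lambda function.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:eu-west-2:123456789012:netcdf-new-data-queue",  # hypothetical
    FunctionName="netcdf-ingest-transform",                                     # hypothetical
    BatchSize=10,       # let Lambda pull up to 10 SQS messages per invocation
    Enabled=True,
)
```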

5. Component details and code samples

5.1 Component details and code samples – sqs and lambda

  • SQS type: as there is no need for first-in-first-out delivery or exactly-once processing, we stay with the standard queue type, which offers better scalability;
  • SQS encryption: Amazon SQS provides in-transit encryption by default; we also added at-rest encryption to our queue by enabling server-side encryption with the Amazon SQS-managed key (SSE-SQS), as sketched below;
  • Lambda: the Lambda function has an SQS trigger, and for performance we use a batch writer to write into DynamoDB;
  • Lambda permissions: to follow the least-privilege principle, we created a new IAM role with only the basic Lambda permissions (just policies like AWSLambdaSQSQueueExecutionRole/AWSLambdaExecute/AWSLambdaDynamoDBExecutionRole);
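
For completeness, enabling SSE-SQS on an existing queue is a single attribute change; a small boto3 sketch (the queue URL is a placeholder):

```python
# Sketch: turn on at-rest encryption with the SQS-managed key (SSE-SQS).
import boto3

sqs = boto3.client("sqs")

sqs.set_queue_attributes(
    QueueUrl="https://sqs.eu-west-2.amazonaws.com/123456789012/netcdf-new-data-queue",  # hypothetical
    Attributes={"SqsManagedSseEnabled": "true"},
)
```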

5.2 Component details and code samples - dynamoDB

  • DynamoDB is serverless and auto-scales with data volume and query load; to avoid hot-partition bottlenecks, we used forecast_period as the partition (hash) key and forecast_time as the sort key (forecast_period is the difference between forecast_reference_time and forecast_time);
  • As end users typically query by time, we created a global secondary index (GSI) with its partition key on the time field forecast_time;
  • Encryption: we turned on encryption at rest, using encryption keys stored in AWS Key Management Service and managed by DynamoDB at no extra cost;
  • Permissions: for the APIs that query DynamoDB, we followed the least-privilege principle and created an access-control policy with read-only access to the table and the index (a sketch of such a policy follows this list);
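
A sketch of what such a read-only access policy could look like (the account id, table name and ARNs are placeholders; the real PoC policy may differ):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
    "Resource": [
      "arn:aws:dynamodb:*:123456789012:table/forecast_metadata",
      "arn:aws:dynamodb:*:123456789012:table/forecast_metadata/index/*"
    ]
  }]
}
```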

5.3 Component details and code samples – api gateway

  • API: I created two resources and methods, and configured the integration request and integration response mapping templates to fulfil the scan and query on DynamoDB, with paths like /times and /times/{time}; the latter uses the GSI we created for the table (a sample request mapping template is sketched after this list);
  • API key: I configured the method request to require an API key;
  • Permissions: to follow the least-privilege principle, we created a new IAM role with only the necessary permissions (just policies like AmazonAPIGatewayPushToCloudWatchLogs, plus the DynamoDB read-only policy we created earlier);
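
For the /times/{time} resource, a request mapping template for a direct API gateway to DynamoDB integration (integration action: Query) could look roughly like the snippet below; the table name, index name and attribute names are assumptions rather than the PoC's real values:

```json
{
  "TableName": "forecast_metadata",
  "IndexName": "forecast_time-index",
  "KeyConditionExpression": "forecast_time = :t",
  "ExpressionAttributeValues": {
    ":t": {"S": "$input.params('time')"}
  }
}
```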

5.4 Component details and code samples – lambda codes

[Figure: screenshot of the Lambda function code]
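
The original code screenshot is not reproduced here; as a stand-in, the following is a hypothetical sketch of what an SQS-triggered handler along these lines could look like. The event parsing, the parse_key helper, the attribute names and the table name are assumptions, and the real PoC code may differ.

```python
# Hypothetical sketch of the SQS-triggered ingest/transform Lambda.
import json
import os
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("METADATA_TABLE", "forecast_metadata"))  # hypothetical


def parse_key(key):
    # Hypothetical placeholder: the real PoC derives forecast_time and
    # forecast_period from the netCDF file's metadata / naming convention.
    forecast_time = key.rsplit("/", 1)[-1].split("_")[0]
    forecast_period = 0
    return forecast_time, forecast_period


def handler(event, context):
    # Each SQS record wraps an S3 "ObjectCreated" notification forwarded by SNS.
    with table.batch_writer() as batch:                    # batch writes for throughput
        for record in event["Records"]:
            body = json.loads(record["body"])
            for s3_record in body.get("Records", []):
                bucket = s3_record["s3"]["bucket"]["name"]
                key = s3_record["s3"]["object"]["key"]
                forecast_time, forecast_period = parse_key(key)
                batch.put_item(Item={
                    "forecast_period": forecast_period,
                    "forecast_time": forecast_time,
                    "s3_key": f"s3://{bucket}/{key}",
                })
    return {"processed": len(event["Records"])}
```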

5.5 Component details and code samples – api codes

[Figure: screenshot of the API code]
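
The API-side screenshot is likewise not reproduced. For illustration, an integration response mapping template that flattens the DynamoDB Query output into a simpler JSON shape could look roughly like this (the forecast_time and s3_key attributes are assumptions):

```
#set($items = $input.path('$.Items'))
{
  "forecasts": [
    #foreach($item in $items)
    {
      "forecast_time": "$item.forecast_time.S",
      "s3_key": "$item.s3_key.S"
    }#if($foreach.hasNext),#end
    #end
  ]
}
```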

6. Automation using scripts – CloudFormation

  • I believe in IaC (Infrastructure as Code) and GitOps: humans make mistakes, and automation helps us here (plus automation is more efficient and scripts are repeatable);
  • So I tried to use CloudFormation templates to simplify infrastructure management (due to time constraints, I only finished the DynamoDB template);
  • Below is part of the CloudFormation script for the DynamoDB table creation;

[Figure: screenshot of the CloudFormation template]
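
The template screenshot is not reproduced here; a hedged sketch of what such a DynamoDB resource could look like in CloudFormation YAML is below, mirroring the table design from section 5.2 (the logical name, table name and GSI name are assumptions):

```yaml
# Hypothetical CloudFormation sketch of the forecast metadata table.
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  ForecastMetadataTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: forecast_metadata            # hypothetical
      BillingMode: PAY_PER_REQUEST            # serverless, on-demand capacity
      AttributeDefinitions:
        - AttributeName: forecast_period
          AttributeType: N
        - AttributeName: forecast_time
          AttributeType: S
      KeySchema:
        - AttributeName: forecast_period
          KeyType: HASH
        - AttributeName: forecast_time
          KeyType: RANGE
      GlobalSecondaryIndexes:
        - IndexName: forecast_time-index      # hypothetical GSI name
          KeySchema:
            - AttributeName: forecast_time
              KeyType: HASH
          Projection:
            ProjectionType: ALL
      SSESpecification:
        SSEEnabled: true                      # encryption at rest
```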

7. End-user query simulation results

[Figure: screenshot of the end-user query simulation results]

  • IAM user with read-only permission – IAM user name: arn:aws:iam::000435319421:user/demo
  • IAM user with read-only permission – IAM user password: demo123@aws
  • End user request url: https://jye2m0pw20.execute-api.us-east2.amazonaws.com/v1/times/2022-04-16T22:45:00Z
  • End user request sample path parameter: 2022-04-17T22:30:00Z/2022-04-16T22:45:00Z, etc;
  • End user request type: GET
  • End user request Authorization Type: API key
    • Key: x-api-key
    • Value: kNKmXfQGNx802XU1f75Mu9vRAFBvWIdM5uT7NmHa
    • Add to: header
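
Putting the request details above together, a simulated end-user call could look like this (using the demo URL and API key quoted above; the requests library is an assumption of this sketch):

```python
# Sketch: call the API as an end user, authenticating with the x-api-key header.
import requests

url = "https://jye2m0pw20.execute-api.us-east2.amazonaws.com/v1/times/2022-04-16T22:45:00Z"
headers = {"x-api-key": "kNKmXfQGNx802XU1f75Mu9vRAFBvWIdM5uT7NmHa"}

resp = requests.get(url, headers=headers, timeout=10)
resp.raise_for_status()
print(resp.json())   # the metadata of the S3 file(s) holding forecasts for that time
```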

8. Wrap up

  • High availability (no downtime): the solution uses components such as SNS, SQS, Lambda, DynamoDB, API gateway and S3, all of which are managed services that scale well and scale automatically, ensuring high availability (no downtime);
  • Quick response: the solution uses DynamoDB in the serving layer, which scales well and automatically; together with the careful design of the hash key, sort key and GSI, it offers quick response times to end users;
  • Timely availability of new data: the solution follows an event-driven architecture with SQS and Lambda, ensuring the timely availability of new data;
  • Cost effectiveness: the solution follows a serverless architecture and uses AWS serverless services, so we pay only for what we use, which makes it cost-effective;
  • Security:
    • Encryption: the AWS services use TLS between the user application and the service for data-in-transit encryption, and we enabled data-at-rest encryption;
    • Authentication and authorization: we followed the least-privilege principle when creating IAM roles and policies, and we used an API key to protect our API gateway from malicious access;
    • Audit: CloudWatch is used for auditing;