Polygon (MATIC) DApp Smart Contract System Development: Detailed Notes and Sample Source Code

What is decentralized storage?

Decentralized storage is a storage solution built on a decentralized blockchain network rather than on a single centralized entity. Data is stored across many nodes in a distributed network instead of on one server controlled by a single organization.

IPFS is a decentralized peer-to-peer file storage network that allows users to store, access, and share files in a distributed manner, providing higher security, privacy, and scalability. StorX enables anyone to securely encrypt, segment, and distribute critical data across multiple managed nodes worldwide. Each file stored on StorX is divided into multiple parts before encryption and stored on separate storage nodes run by different operators located around the world.
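The split-address-distribute workflow described above can be sketched in a few lines. This is a minimal, simplified illustration, not IPFS's or StorX's actual protocol: real networks use much larger blocks, Merkle-DAG CIDs, and genuine encryption, and the placement rule here (`digest mod num_nodes`) is a hypothetical stand-in for their node-selection logic.

```python
import hashlib

CHUNK_SIZE = 4  # tiny for the demo; real systems use e.g. 256 KiB blocks


def chunk_and_address(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into fixed-size chunks and content-address each one,
    the way IPFS-style networks derive identifiers from the content
    itself (so any tampering changes the address)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]


def assign_to_nodes(addressed, num_nodes: int = 3):
    """Deterministically map each chunk to a storage node by its digest,
    so no single node ever holds the whole file."""
    placement = {}
    for digest, chunk in addressed:
        node = int(digest, 16) % num_nodes  # hypothetical placement rule
        placement.setdefault(node, []).append((digest, chunk))
    return placement
```

Retrieval is the inverse: fetch each chunk from its node, verify it by re-hashing against its address, and concatenate the chunks in order.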

The sample source below is a BatchNorm-fusion graph pass (from the PPQ quantization toolkit) that folds each `BatchNormalization` node into the preceding `Conv`, `Gemm`, or `ConvTranspose` operation:

```python
# Method of a PPQ graph-formatting class that holds `self.graph`; it relies
# on `torch` and on PPQ's Operation, Variable, SearchableGraph and
# ppq_warning utilities being imported in the enclosing module.
def fuse_bn(self):
    search_engine = SearchableGraph(graph=self.graph)
    paths = search_engine.path_matching(
        sp_expr=lambda x: x.type in {'Conv', 'Gemm', 'ConvTranspose'},
        rp_expr=lambda x, y: False,
        ep_expr=lambda x: x.type == 'BatchNormalization',
        direction='down')

    for path in paths:
        path = path.tolist()
        assert len(path) == 2, 'Oops, seems we got something unexpected.'
        computing_op, bn_op = path
        assert isinstance(computing_op, Operation) and isinstance(bn_op, Operation)

        # Only fuse when the pair is connected one-to-one in the graph.
        if (len(self.graph.get_downstream_operations(computing_op)) != 1 or
                len(self.graph.get_upstream_operations(bn_op)) != 1):
            ppq_warning(f'PPQ can not merge operation {computing_op.name} and {bn_op.name}; '
                        'this is not supposed to happen with your network, and a '
                        'network with batchnorm inside might not be able to quantize and deploy.')
            continue

        assert len(bn_op.parameters) == 4, 'BatchNorm should have 4 parameters, namely alpha, beta, mean, var'
        alpha = bn_op.parameters[0].value
        beta = bn_op.parameters[1].value
        mean = bn_op.parameters[2].value
        var = bn_op.parameters[3].value
        epsilon = bn_op.attributes.get('epsilon', 1e-5)

        if computing_op.num_of_parameter == 1:
            w = computing_op.parameters[0].value  # no bias.
            assert isinstance(w, torch.Tensor), 'values of parameters are assumed as torch Tensor'
            if computing_op.type == 'ConvTranspose':
                b = torch.zeros(w.shape[1] * computing_op.attributes.get('group', 1))
            elif computing_op.type == 'Gemm' and computing_op.attributes.get('transB', 0) == 0:
                b = torch.zeros(w.shape[1])
            else:
                b = torch.zeros(w.shape[0])
        else:
            w, b = [var.value for var in computing_op.parameters[:2]]  # has bias.

        if computing_op.type == 'Conv':
            # calculate new weight and bias
            scale = alpha / torch.sqrt(var + epsilon)
            w = w * scale.reshape([-1] + [1] * (w.ndim - 1))
            b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta
        elif computing_op.type == 'Gemm':
            # calculate new weight and bias
            scale = alpha / torch.sqrt(var + epsilon)
            if computing_op.attributes.get('transB', 0):
                w = w * scale.reshape([-1, 1])
            else:
                w = w * scale.reshape([1, -1])
            b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta
        elif computing_op.type == 'ConvTranspose':
            scale = alpha / torch.sqrt(var + epsilon)
            group = computing_op.attributes.get('group', 1)
            scale = scale.reshape([group, 1, -1, 1, 1])
            w = w.reshape([group, -1, w.shape[1], w.shape[2], w.shape[3]]) * scale
            w = w.reshape([w.shape[0] * w.shape[1], w.shape[2], w.shape[3], w.shape[4]])
            b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta
        else:
            raise TypeError(
                f'Unexpected op type {computing_op.type}. '
                f'Can not merge {computing_op.name} with {bn_op.name}.')

        # create new op and variables
        merged_op = Operation(
            computing_op.name, op_type=computing_op.type,
            attributes=computing_op.attributes.copy())
        weight_var = Variable(computing_op.name + '_weight', w, True, [merged_op])
        bias_var = Variable(computing_op.name + '_bias', b, True, [merged_op])

        # replace & dirty work: rewire the input and output variables
        input_var = computing_op.inputs[0]
        output_var = bn_op.outputs[0]
        input_var.dest_ops.remove(computing_op)
        input_var.dest_ops.append(merged_op)
        output_var.source_op = merged_op

        # delete old operations
        computing_op.inputs.pop(0)
        bn_op.outputs.clear()
        self.graph.remove_operation(computing_op)

        # insert the fused operation
        self.graph.append_operation(merged_op)
        merged_op.inputs.extend([input_var, weight_var, bias_var])
        merged_op.outputs.extend([output_var])
        self.graph.append_variable(weight_var)
        self.graph.append_variable(bias_var)
```
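The arithmetic behind every branch of the pass is the standard batch-norm folding identity: for y = γ·(Wx + b − μ)/√(σ² + ε) + β, the fused parameters are W′ = W·γ/√(σ² + ε) and b′ = γ·(b − μ)/√(σ² + ε) + β. A minimal stdlib-only sketch of the per-channel (scalar) case, independent of PPQ and torch:

```python
import math


def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into a single weight/bias pair
    for one channel, per the identity above."""
    scale = gamma / math.sqrt(var + eps)
    return w * scale, gamma * (b - mean) / math.sqrt(var + eps) + beta


# Check the identity on one channel: conv-then-BN must equal the folded conv.
w, b = 0.5, 0.1                                # original weight and bias
gamma, beta, mean, var = 1.2, -0.3, 0.05, 0.8  # BN parameters
x = 2.0                                        # an arbitrary input

y_ref = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
assert abs(y_ref - (wf * x + bf)) < 1e-9
```

Because the folded operation is a plain `Conv`/`Gemm`, the quantizer no longer has to handle `BatchNormalization` at all, which is exactly why the pass warns and skips when the two nodes are not connected one-to-one.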
