Matic (Polygon, "马蹄链") DAPP Smart Contract System Development: Detailed Explanation, Approach, and Source Code

 What is decentralized storage?

Decentralized storage is a storage solution built on a blockchain's decentralized network rather than on a single centralized entity: data is spread across many nodes in a distributed network instead of sitting on one server controlled by a single organization.

IPFS is a decentralized peer-to-peer file storage network that allows users to store, access, and share files in a distributed manner, providing higher security, privacy, and scalability. StorX enables anyone to securely encrypt, segment, and distribute critical data across multiple managed nodes worldwide. Each file stored on StorX is divided into multiple parts before encryption and stored on separate storage nodes run by different operators located around the world.
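As a toy illustration of the segment-encrypt-distribute idea (this is not the actual StorX or IPFS protocol, and all function names here are hypothetical), a file can be cut into fixed-size shards, each shard encrypted with its own key, and each ciphertext addressed by its hash before being handed to different nodes:

```python
import hashlib
import os

def shard_and_encrypt(data: bytes, shard_size: int = 64):
    """Split data into fixed-size shards; XOR each shard with a random
    one-time keystream (a stand-in for real encryption) and address the
    ciphertext by its SHA-256 hash, content-addressing style."""
    shards = []  # list of (content_id, ciphertext, key), in order
    for i in range(0, len(data), shard_size):
        chunk = data[i:i + shard_size]
        key = os.urandom(len(chunk))                       # per-shard key
        cipher = bytes(a ^ b for a, b in zip(chunk, key))  # toy encryption
        shards.append((hashlib.sha256(cipher).hexdigest(), cipher, key))
    return shards

def reassemble(shards):
    """Decrypt each shard and concatenate them back in order."""
    return b''.join(bytes(a ^ b for a, b in zip(cipher, key))
                    for _cid, cipher, key in shards)
```

In a real network each (content_id, ciphertext) pair would be pushed to a different storage node while the keys stay with the data owner; the one-time-pad XOR is used here only to keep the sketch dependency-free.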

The following Python snippet, from the PPQ quantization toolkit's graph optimizer, folds BatchNormalization nodes into the preceding Conv, Gemm, or ConvTranspose node so the network can be quantized and deployed without standalone batch-norm layers:

```python
# Requires torch; SearchableGraph, Operation, Variable and ppq_warning
# are provided by the PPQ library (exact import paths vary by version).
import torch

def fuse_bn(self):
    search_engine = SearchableGraph(graph=self.graph)
    paths = search_engine.path_matching(
        sp_expr=lambda x: x.type in {'Conv', 'Gemm', 'ConvTranspose'},
        rp_expr=lambda x, y: False,
        ep_expr=lambda x: x.type == 'BatchNormalization',
        direction='down')

    for path in paths:
        path = path.tolist()
        assert len(path) == 2, 'Oops, seems we got something unexpected.'

        computing_op, bn_op = path
        assert isinstance(computing_op, Operation) and isinstance(bn_op, Operation)

        # fusion is only safe when the two operations are linked one-to-one
        if (len(self.graph.get_downstream_operations(computing_op)) != 1 or
                len(self.graph.get_upstream_operations(bn_op)) != 1):
            ppq_warning(f'PPQ can not merge operation {computing_op.name} and {bn_op.name}, '
                        'this is not supposed to happen with your network, '
                        'network with batchnorm inside might not be able to quantize and deploy.')
            continue

        assert len(bn_op.parameters) == 4, 'BatchNorm should have 4 parameters, namely alpha, beta, mean, var'
        alpha = bn_op.parameters[0].value  # scale (gamma)
        beta = bn_op.parameters[1].value   # shift
        mean = bn_op.parameters[2].value   # running mean
        var = bn_op.parameters[3].value    # running variance
        epsilon = bn_op.attributes.get('epsilon', 1e-5)

        if computing_op.num_of_parameter == 1:
            w = computing_op.parameters[0].value  # no bias.
            assert isinstance(w, torch.Tensor), 'values of parameters are assumed as torch Tensor'
            if computing_op.type == 'ConvTranspose':
                b = torch.zeros(w.shape[1] * computing_op.attributes.get('group', 1))
            elif computing_op.type == 'Gemm' and computing_op.attributes.get('transB', 0) == 0:
                b = torch.zeros(w.shape[1])
            else:
                b = torch.zeros(w.shape[0])
        else:
            w, b = [var.value for var in computing_op.parameters[:2]]  # has bias.

        if computing_op.type == 'Conv':
            # calculate new weight and bias
            scale = alpha / torch.sqrt(var + epsilon)
            w = w * scale.reshape([-1] + [1] * (w.ndim - 1))
            b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

        elif computing_op.type == 'Gemm':
            # calculate new weight and bias
            scale = alpha / torch.sqrt(var + epsilon)
            if computing_op.attributes.get('transB', 0):
                w = w * scale.reshape([-1, 1])
            else:
                w = w * scale.reshape([1, -1])
            b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

        elif computing_op.type == 'ConvTranspose':
            scale = alpha / torch.sqrt(var + epsilon)
            group = computing_op.attributes.get('group', 1)
            scale = scale.reshape([group, 1, -1, 1, 1])
            w = w.reshape([group, -1, w.shape[1], w.shape[2], w.shape[3]]) * scale
            w = w.reshape([w.shape[0] * w.shape[1], w.shape[2], w.shape[3], w.shape[4]])
            b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

        else:
            raise TypeError(
                f'Unexpected op type {computing_op.type}. '
                f'Can not merge {computing_op.name} with {bn_op.name}')

        # create new op and variable
        merged_op = Operation(computing_op.name, op_type=computing_op.type,
                              attributes=computing_op.attributes.copy())
        weight_var = Variable(computing_op.name + '_weight', w, True, [merged_op])
        bias_var = Variable(computing_op.name + '_bias', b, True, [merged_op])

        # replace & dirty work
        input_var = computing_op.inputs[0]
        output_var = bn_op.outputs[0]
        input_var.dest_ops.remove(computing_op)
        input_var.dest_ops.append(merged_op)
        output_var.source_op = merged_op

        # delete old operations
        computing_op.inputs.pop(0)
        bn_op.outputs.clear()
        self.graph.remove_operation(computing_op)

        # insert new
        self.graph.append_operation(merged_op)
        merged_op.inputs.extend([input_var, weight_var, bias_var])
        merged_op.outputs.extend([output_var])
        self.graph.append_variable(weight_var)
        self.graph.append_variable(bias_var)
```
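The folding formulas above (w' = w·γ/√(var+ε), b' = γ·(b−mean)/√(var+ε)+β) can be sanity-checked with plain PyTorch, outside of PPQ's graph machinery. A minimal sketch, assuming an eval-mode Conv2d followed by BatchNorm2d:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv2d(3, 8, kernel_size=3, bias=True)
bn = nn.BatchNorm2d(8).eval()  # eval mode: BN uses running statistics
# give BN non-trivial statistics and affine parameters
bn.running_mean.uniform_(-1.0, 1.0)
bn.running_var.uniform_(0.5, 2.0)
bn.weight.data.uniform_(0.5, 1.5)  # gamma (alpha in the snippet above)
bn.bias.data.uniform_(-1.0, 1.0)   # beta

# fold BN into the convolution: w' = w * scale, b' = (b - mean) * scale + beta
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
fused = nn.Conv2d(3, 8, kernel_size=3, bias=True)
fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
fused.bias.data = (conv.bias.data - bn.running_mean) * scale + bn.bias

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```

The fused convolution produces the same outputs as the Conv+BN pair to within floating-point tolerance, which is exactly what lets the graph pass delete the BatchNormalization node.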
