Matic (Polygon) DAPP Smart Contract System Development: Detailed Explanation and Example Source Code


 What is decentralized storage?

Decentralized storage is a storage solution based on a blockchain decentralized network, rather than relying on a single centralized entity. Data is stored on various nodes in a distributed network, rather than on a single server under the control of a single organization.

IPFS is a decentralized peer-to-peer file storage network that allows users to store, access, and share files in a distributed manner, providing higher security, privacy, and scalability. StorX enables anyone to securely encrypt, segment, and distribute critical data across multiple managed nodes worldwide. Each file stored on StorX is divided into multiple parts before encryption and stored on separate storage nodes run by different operators located around the world.
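The split-then-address idea behind these networks can be sketched in a few lines. This is a simplified illustration of content addressing, not IPFS's actual chunker (which builds Merkle DAGs and uses multihash-encoded CIDs); the function name and chunk size are our own choices:

```python
import hashlib

def chunk_and_address(data: bytes, chunk_size: int = 256 * 1024):
    """Split a byte string into fixed-size chunks and derive a
    content address (here, a SHA-256 hex digest) for each chunk.
    Content-addressed networks reference data by hash rather than
    by server location, so any node holding a chunk can serve it."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

# a 600 KiB payload splits into three 256 KiB-or-smaller chunks
addressed = chunk_and_address(b'x' * (600 * 1024))
print(len(addressed))  # → 3
```

Because the address is derived from the content itself, a retrieved chunk can be verified by re-hashing it, which is what lets untrusted, independently operated nodes store the pieces.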

The snippet below is a graph-optimization pass that fuses each BatchNormalization node into the preceding Conv, Gemm, or ConvTranspose operation, so that the fused network computes the same result with one operation fewer:

    def fuse_bn(self):
        search_engine = SearchableGraph(graph=self.graph)
        paths = search_engine.path_matching(
            sp_expr=lambda x: x.type in {'Conv', 'Gemm', 'ConvTranspose'},
            rp_expr=lambda x, y: False,
            ep_expr=lambda x: x.type == 'BatchNormalization',
            direction='down')

        for path in paths:
            path = path.tolist()
            assert len(path) == 2, 'Oops, seems we got something unexpected.'
            computing_op, bn_op = path
            assert isinstance(computing_op, Operation) and isinstance(bn_op, Operation)

            # fusion is only safe when the pattern is a simple chain:
            # the computing op feeds exactly one consumer, and the bn op
            # has exactly one producer.
            if (len(self.graph.get_downstream_operations(computing_op)) != 1 or
                    len(self.graph.get_upstream_operations(bn_op)) != 1):
                ppq_warning(f'PPQ can not merge operation {computing_op.name} and {bn_op.name}, '
                            'this is not supposed to happen with your network; '
                            'a network with batchnorm inside might not be able to quantize and deploy.')
                continue

            assert len(bn_op.parameters) == 4, 'BatchNorm should have 4 parameters, namely alpha, beta, mean, var'
            alpha = bn_op.parameters[0].value
            beta = bn_op.parameters[1].value
            mean = bn_op.parameters[2].value
            var = bn_op.parameters[3].value
            epsilon = bn_op.attributes.get('epsilon', 1e-5)

            if computing_op.num_of_parameter == 1:
                w = computing_op.parameters[0].value  # no bias.
                assert isinstance(w, torch.Tensor), 'values of parameters are assumed to be torch tensors'
                # synthesize a zero bias with the correct output-channel count.
                if computing_op.type == 'ConvTranspose':
                    b = torch.zeros(w.shape[1] * computing_op.attributes.get('group', 1))
                elif computing_op.type == 'Gemm' and computing_op.attributes.get('transB', 0) == 0:
                    b = torch.zeros(w.shape[1])
                else:
                    b = torch.zeros(w.shape[0])
            else:
                w, b = [p.value for p in computing_op.parameters[:2]]  # has bias.

            if computing_op.type == 'Conv':
                # calculate new weight and bias
                scale = alpha / torch.sqrt(var + epsilon)
                w = w * scale.reshape([-1] + [1] * (w.ndim - 1))
                b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

            elif computing_op.type == 'Gemm':
                # calculate new weight and bias
                scale = alpha / torch.sqrt(var + epsilon)
                if computing_op.attributes.get('transB', 0):
                    w = w * scale.reshape([-1, 1])
                else:
                    w = w * scale.reshape([1, -1])
                b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

            elif computing_op.type == 'ConvTranspose':
                # ConvTranspose stores weights as [in, out // group, kh, kw],
                # so the scale must be applied along the second axis, per group.
                scale = alpha / torch.sqrt(var + epsilon)
                group = computing_op.attributes.get('group', 1)
                scale = scale.reshape([group, 1, -1, 1, 1])
                w = w.reshape([group, -1, w.shape[1], w.shape[2], w.shape[3]]) * scale
                w = w.reshape([w.shape[0] * w.shape[1], w.shape[2], w.shape[3], w.shape[4]])
                b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

            else:
                raise TypeError(
                    f'Unexpected op type {computing_op.type}. '
                    f'Can not merge {computing_op.name} with {bn_op.name}')

            # create new op and variables
            merged_op = Operation(computing_op.name, op_type=computing_op.type,
                                  attributes=computing_op.attributes.copy())
            weight_var = Variable(computing_op.name + '_weight', w, True, [merged_op])
            bias_var = Variable(computing_op.name + '_bias', b, True, [merged_op])

            # rewire: the merged op takes over the computing op's input
            # and the bn op's output.
            input_var = computing_op.inputs[0]
            output_var = bn_op.outputs[0]
            input_var.dest_ops.remove(computing_op)
            input_var.dest_ops.append(merged_op)
            output_var.source_op = merged_op

            # delete old operations
            computing_op.inputs.pop(0)
            bn_op.outputs.clear()
            self.graph.remove_operation(computing_op)

            # insert new
            self.graph.append_operation(merged_op)
            merged_op.inputs.extend([input_var, weight_var, bias_var])
            merged_op.outputs.extend([output_var])
            self.graph.append_variable(weight_var)
            self.graph.append_variable(bias_var)
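The re-parameterisation in the fusion pass above rests on a simple identity: inference-mode BatchNorm is an affine map, so it can be folded into the preceding layer's weight and bias. A minimal scalar check of the fused-weight/fused-bias formulas, in plain Python and independent of any graph framework (all variable names here are illustrative):

```python
import math

EPS = 1e-5

def bn(x, alpha, beta, mean, var, eps=EPS):
    # inference-mode BatchNorm: alpha * (x - mean) / sqrt(var + eps) + beta
    return alpha * (x - mean) / math.sqrt(var + eps) + beta

# a 'convolution' reduced to a single output channel: y = w * x + b
w, b = 0.7, 0.2
alpha, beta, mean, var = 1.5, -0.3, 0.05, 0.9

# fused parameters, exactly as computed in fuse_bn above
scale = alpha / math.sqrt(var + EPS)
w_fused = w * scale
b_fused = alpha * (b - mean) / math.sqrt(var + EPS) + beta

# the fused layer reproduces conv followed by batchnorm
x = 2.0
assert abs(bn(w * x + b, alpha, beta, mean, var) - (w_fused * x + b_fused)) < 1e-9
print('fused output matches conv + bn')
```

The same algebra applies per output channel in the tensor case, which is why the pass only has to reshape `scale` to broadcast along the channel axis of each weight layout.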
