Matic (Polygon) DAPP Smart Contract System Development: Detailed Notes and Source Code

What is decentralized storage?

Decentralized storage is a storage solution based on a blockchain decentralized network, rather than relying on a single centralized entity. Data is stored on various nodes in a distributed network, rather than on a single server under the control of a single organization.

IPFS is a decentralized peer-to-peer file storage network that allows users to store, access, and share files in a distributed manner, offering better security, privacy, and scalability than a single hosted server. StorX enables anyone to securely encrypt, fragment, and distribute critical data across multiple managed nodes worldwide: each file stored on StorX is split into multiple fragments, encrypted, and placed on separate storage nodes run by independent operators around the world.
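The split-encrypt-distribute flow described above can be sketched in a few lines of Python. This is a toy model only: the XOR keystream below stands in for real authenticated encryption (a production system would use AES-GCM or similar), and the shard list stands in for a set of independent storage nodes.

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by hashing key + counter.
    # Toy construction for illustration, NOT production cryptography.
    out, counter = b'', 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, 'big')).digest()
        counter += 1
    return out[:length]

def shard_and_encrypt(data: bytes, key: bytes, n_shards: int) -> list:
    # Split the file into n_shards roughly equal pieces, then encrypt each
    # piece with its own per-shard keystream. Each encrypted shard would be
    # handed to a different storage node; no single node sees the full file.
    size = -(-len(data) // n_shards)  # ceiling division
    shards = []
    for i in range(n_shards):
        chunk = data[i * size:(i + 1) * size]
        ks = keystream(key + bytes([i]), len(chunk))
        shards.append(bytes(a ^ b for a, b in zip(chunk, ks)))
    return shards

def decrypt_and_join(shards: list, key: bytes) -> bytes:
    # Reverse the process: decrypt every shard and concatenate in order.
    parts = []
    for i, shard in enumerate(shards):
        ks = keystream(key + bytes([i]), len(shard))
        parts.append(bytes(a ^ b for a, b in zip(shard, ks)))
    return b''.join(parts)

data = b'hello decentralized world'
key = os.urandom(32)
shards = shard_and_encrypt(data, key, n_shards=4)
assert decrypt_and_join(shards, key) == data
```

Only the holder of the key can reassemble the file, and each "node" stores an opaque fragment, which is the core property the prose above describes.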

The code below appears to come from the PPQ quantization toolkit's graph-fusion pass: `fuse_bn` searches the graph for Conv/Gemm/ConvTranspose → BatchNormalization pairs and folds the BN parameters into the preceding layer's weight and bias. It is reconstructed here with indentation and spacing restored, imports added, and the garbled weight-scaling line in the Conv branch repaired (`w = w * scale.reshape(...)`).

  # Dependencies, as used by PPQ (import paths may vary across PPQ versions):
  import torch
  from ppq.IR import Operation, SearchableGraph, Variable
  from ppq.log import ppq_warning

  # Method of PPQ's graph-merger class; `self.graph` is the IR graph being rewritten.
  def fuse_bn(self):
      search_engine = SearchableGraph(graph=self.graph)
      paths = search_engine.path_matching(
          sp_expr=lambda x: x.type in {'Conv', 'Gemm', 'ConvTranspose'},
          rp_expr=lambda x, y: False,
          ep_expr=lambda x: x.type == 'BatchNormalization',
          direction='down')

      for path in paths:
          path = path.tolist()
          assert len(path) == 2, 'Oops, seems we got something unexpected.'

          computing_op, bn_op = path
          assert isinstance(computing_op, Operation) and isinstance(bn_op, Operation)

          # Only fuse when the pair is connected one-to-one in the graph.
          if (len(self.graph.get_downstream_operations(computing_op)) != 1 or
              len(self.graph.get_upstream_operations(bn_op)) != 1):
              ppq_warning(f'PPQ can not merge operation {computing_op.name} and {bn_op.name}, '
                          'this is not supposed to happen with your network; '
                          'a network with batchnorm inside might not be able to quantize and deploy.')
              continue

          assert len(bn_op.parameters) == 4, 'BatchNorm should have 4 parameters, namely alpha, beta, mean, var'
          alpha = bn_op.parameters[0].value
          beta = bn_op.parameters[1].value
          mean = bn_op.parameters[2].value
          var = bn_op.parameters[3].value
          epsilon = bn_op.attributes.get('epsilon', 1e-5)

          if computing_op.num_of_parameter == 1:
              w = computing_op.parameters[0].value  # no bias.
              assert isinstance(w, torch.Tensor), 'values of parameters are assumed as torch Tensor'
              # Synthesize a zero bias of the right length for each op layout.
              if computing_op.type == 'ConvTranspose':
                  b = torch.zeros(w.shape[1] * computing_op.attributes.get('group', 1))
              elif computing_op.type == 'Gemm' and computing_op.attributes.get('transB', 0) == 0:
                  b = torch.zeros(w.shape[1])
              else:
                  b = torch.zeros(w.shape[0])
          else:
              w, b = [var.value for var in computing_op.parameters[:2]]  # has bias.

          if computing_op.type == 'Conv':
              # calculate new weight and bias
              scale = alpha / torch.sqrt(var + epsilon)
              w = w * scale.reshape([-1] + [1] * (w.ndim - 1))
              b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

          elif computing_op.type == 'Gemm':
              # calculate new weight and bias
              scale = alpha / torch.sqrt(var + epsilon)
              if computing_op.attributes.get('transB', 0):
                  w = w * scale.reshape([-1, 1])
              else:
                  w = w * scale.reshape([1, -1])
              b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

          elif computing_op.type == 'ConvTranspose':
              scale = alpha / torch.sqrt(var + epsilon)
              group = computing_op.attributes.get('group', 1)
              scale = scale.reshape([group, 1, -1, 1, 1])
              w = w.reshape([group, -1, w.shape[1], w.shape[2], w.shape[3]]) * scale
              w = w.reshape([w.shape[0] * w.shape[1], w.shape[2], w.shape[3], w.shape[4]])
              b = alpha * (b - mean) / torch.sqrt(var + epsilon) + beta

          else:
              raise TypeError(
                  f'Unexpected op type {computing_op.type}. '
                  f'Can not merge {computing_op.name} with {bn_op.name}')

          # create new op and variable
          merged_op = Operation(computing_op.name, op_type=computing_op.type,
                                attributes=computing_op.attributes.copy())
          weight_var = Variable(computing_op.name + '_weight', w, True, [merged_op])
          bias_var = Variable(computing_op.name + '_bias', b, True, [merged_op])

          # replace & dirty work
          input_var = computing_op.inputs[0]
          output_var = bn_op.outputs[0]

          input_var.dest_ops.remove(computing_op)
          input_var.dest_ops.append(merged_op)
          output_var.source_op = merged_op

          # delete old operations
          computing_op.inputs.pop(0)
          bn_op.outputs.clear()
          self.graph.remove_operation(computing_op)

          # insert new
          self.graph.append_operation(merged_op)
          merged_op.inputs.extend([input_var, weight_var, bias_var])
          merged_op.outputs.extend([output_var])
          self.graph.append_variable(weight_var)
          self.graph.append_variable(bias_var)
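The folding algebra in the Conv branch does not depend on PPQ and can be verified numerically in plain PyTorch: scale the conv weight by alpha / sqrt(var + eps) per output channel, fold (alpha, beta, mean, var) into the bias, and check that the fused conv reproduces Conv followed by BatchNorm in eval mode.

```python
import torch

torch.manual_seed(0)

# A small Conv followed by BatchNorm in eval mode (running stats fixed).
conv = torch.nn.Conv2d(3, 8, kernel_size=3, bias=True)
bn = torch.nn.BatchNorm2d(8).eval()
bn.running_mean.uniform_(-1, 1)
bn.running_var.uniform_(0.5, 2.0)
bn.weight.data.uniform_(0.5, 1.5)   # alpha in the formulas above
bn.bias.data.uniform_(-1, 1)        # beta

alpha, beta = bn.weight.data, bn.bias.data
mean, var, eps = bn.running_mean, bn.running_var, bn.eps

# Same folding as the Conv branch of fuse_bn.
scale = alpha / torch.sqrt(var + eps)
fused = torch.nn.Conv2d(3, 8, kernel_size=3, bias=True)
fused.weight.data = conv.weight.data * scale.reshape([-1] + [1] * (conv.weight.ndim - 1))
fused.bias.data = alpha * (conv.bias.data - mean) / torch.sqrt(var + eps) + beta

x = torch.randn(1, 3, 16, 16)
with torch.no_grad():
    assert torch.allclose(bn(conv(x)), fused(x), atol=1e-4)
```

The same check works for the Gemm and ConvTranspose branches; only the reshape of `scale` changes to match each weight layout.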
