Bayesian Extraction of Jet Energy Loss Distributions in Heavy-Ion Collisions

Summary: study notes

1. Problem

The task is to fit the three parameters [alpha, beta, gamma] of a statistical model to the measured R_AA, using Bayesian analysis together with MCMC sampling. The priors of the three parameters are taken to be uniform distributions, and the medium-induced momentum loss Δp_T is described by a gamma distribution, as written out explicitly below.
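Written out, the model that the code below implements is the following (my reconstruction from the code, using the α, β, γ notation of the paper in the title; the code's a, b, c play the roles of α, β, γ):

$$
P(\Delta p_T) = \frac{\tilde\beta^{\,\alpha}}{\Gamma(\alpha)}\,\Delta p_T^{\,\alpha-1}\,e^{-\tilde\beta\,\Delta p_T},
\qquad
\tilde\beta = \frac{\alpha}{\langle\Delta p_T\rangle},
\qquad
\langle\Delta p_T\rangle = \beta\,(p_T')^{\gamma}\,\ln p_T',\quad p_T' = p_T + \Delta p_T,
$$

$$
R_{AA}(p_T) \approx \frac{\int_0^{\infty} P(\Delta p_T)\,\dfrac{d\sigma_{pp}}{dp_T}(p_T+\Delta p_T)\,d\Delta p_T}{\dfrac{d\sigma_{pp}}{dp_T}(p_T)} .
$$

The Δp_T integral is evaluated with the 30-point Gauss-Laguerre table (gala30x, gala30w) defined below.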


2. Workflow


1. Data preparation

2. Model construction

3. Model fitting

4. Extraction of the fitted parameters

The detailed procedure should be clear from the code below; a rough mapping of the four steps onto the code follows.
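Concretely, the four steps correspond to the following parts of the code (a rough guide only, since the listing defines the functions and the class but does not include a driver script):

# 1. data preparation:   read_data() for the pp spectrum, np.loadtxt(RAA_fname) in RAA2ELoss.__init__
# 2. model construction: the "with pm.Model()" block inside RAA2ELoss.model()
# 3. model fitting:      pm.find_MAP() followed by pm.Metropolis() and pm.sample()
# 4. fitted parameters:  the posterior samples of a, b and c in the resulting trace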

import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from pymc3 import Uniform, Normal
from scipy.special import gamma
from scipy.interpolate import interp1d
from scipy import optimize
def read_data():   # read the pp reference spectrum: column 0 = pt, column 1 = spectrum
    dat = np.loadtxt("./RAA_2760.txt")
    return dat[:,0],dat[:,1]
pp_x,pp_y = read_data()
gala30x = np.array([
0.047407180540804851462, 0.249923916753160223994,
0.614833454392768284613, 1.143195825666100798284,
1.836454554622572291486, 2.69652187455721519578,
3.72581450777950894932, 4.92729376584988240966,
6.304515590965074522826, 7.861693293370260468756,
9.603775985479262079849, 11.5365465979561397008,
13.6667446930642362949, 16.00222118898106625462,
18.55213484014315012403, 21.32720432178312892793,
24.34003576453269340098, 27.60555479678096102758,
31.14158670111123581829, 34.96965200824906954359,
39.11608494906788912192, 43.61365290848482780666,
48.50398616380420042732, 53.84138540650750561747,
59.69912185923549547712, 66.18061779443848965169,
73.44123859555988223951, 81.73681050672768572223,
91.5564665225368382555, 104.1575244310588945118
])
gala30w = np.array([
0.121677895367261782329, 0.283556882734938525493,
0.446432426678773424704, 0.610532130075812839931,
0.77630347862220587813, 0.944233288641719426021,
1.11484470167521153566, 1.288705432832675646608,
1.466439137624214862, 1.64873949785431911827,
1.83638767787041103674, 2.03027425167718247131,
2.23142718244322451682, 2.44104811309112149776,
2.66056024337508997273, 2.89167264137737623469,
3.13646833382438459, 3.3975275957906408941,
3.67810472956067256012, 3.9823886239009637486,
4.315899244899456113557, 4.686114009126298021,
5.1035026514834184093, 5.58333298168755125349,
6.1490444086574353235, 6.8391305457947582711,
7.7229478770061872416, 8.9437424683710389523,
10.87187293837791147282, 15.026111628122932986
])
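# Sanity check (added note, not part of the original repository): gala30x/gala30w
# appear to be the 30-point Gauss-Laguerre nodes and the *modified* weights
# w_i*exp(x_i), so that for a decaying integrand g,
#     integral_0^inf g(x) dx  ~=  sum_i gala30w[i] * g(gala30x[i]).
# Both tests below compare against integrals whose exact value is 1:
assert np.isclose(np.sum(gala30w * np.exp(-gala30x)), 1.0)             # ∫ exp(-x) dx = 1
assert np.isclose(np.sum(gala30w * gala30x * np.exp(-gala30x)), 1.0)   # ∫ x·exp(-x) dx = 1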
class RAA2ELoss():
    def __init__(self,RAA_fname,pp_pt=None,pp_spectra=None):
        self.with_ppdata = False
        if not(pp_pt is None or pp_spectra is None):
            # interpolate the pp spectrum so it can be evaluated at pt + dpt
            self.pp_fit = interp1d(pp_pt,pp_spectra,fill_value="extrapolate")
            self.with_ppdata = True
        # R_AA data file: columns are pt, pt error, R_AA, R_AA error
        RAA_data = np.loadtxt(RAA_fname)
        self.RAA_x = RAA_data[:,0]
        self.RAA_xerr = RAA_data[:,1]
        self.RAA_y = RAA_data[:,2]
        self.RAA_yerr = RAA_data[:,3]
    def mean_ptloss(self, pt, b, c):
        # mean momentum loss <dpt> = b * pt^c * ln(pt)
        return b*pt**c*np.log(pt)
    def model(self):
        with pm.Model() as basic_model:
            # uniform priors for the three parameters (a, b, c) ~ (alpha, beta, gamma)
            a = Uniform('a', lower=0, upper=10)
            b = Uniform('b', lower=0, upper=10)
            c = Uniform('c', lower=0, upper=1)
            # Gauss-Laguerre quadrature over the momentum loss dpt; the nodes are
            # rescaled per data point so that the largest node corresponds to dpt = pt
            scale_fct = self.RAA_x / gala30x[-1]
            intg_res = np.zeros_like(self.RAA_x)
            for i, x in enumerate(gala30x):
                dpt = x * scale_fct
                shifted_pt = self.RAA_x + dpt
                mean_dpt = self.mean_ptloss(shifted_pt, b, c)
                # gamma distribution for dpt: shape alpha = a, rate beta = a/<dpt>,
                # so that its mean equals <dpt>
                alpha = a
                beta = a / mean_dpt
                pdpt = beta**alpha / gamma(alpha) * np.exp(-beta*dpt) * dpt**(alpha-1)
                # accumulate P(dpt) * (pp spectrum at the shifted momentum)
                intg_res += scale_fct*gala30w[i]*pdpt*self.pp_fit(shifted_pt)
            # R_AA(pt) = convolved pp spectrum / pp spectrum
            mu = intg_res / self.pp_fit(self.RAA_x)
            # Gaussian likelihood with the experimental R_AA errors as the width
            likelihood_y = Normal('likelihood_y', mu=mu, sd=self.RAA_yerr,
                                  observed=self.RAA_y)
            # MAP estimate as the starting point, then Metropolis sampling;
            # keep the trace on the object so it can be inspected afterwards
            start = pm.find_MAP(fmin=optimize.fmin_powell)
            step = pm.Metropolis()
            self.trace = pm.sample(1200, start=start, step=step)
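A minimal usage sketch (the R_AA file name here is a placeholder; substitute the data file from the GitHub repository, and the plotting/saving steps are omitted just as in the flowchart further below):

raa = RAA2ELoss("./RAA_2760_cent0_5.txt", pp_pt=pp_x, pp_spectra=pp_y)  # placeholder file name
raa.model()                    # build the model and run MAP + Metropolis sampling
print(pm.summary(raa.trace))   # posterior summary for a, b, c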

PS: The code and data are taken from GitHub; the version of pymc used there is the old pymc (version 1), so running it directly with pymc3 raises errors.
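One way to make the density evaluation inside model() pymc3-friendly is to replace the numpy/scipy calls on random variables with theano tensor operations. A minimal sketch of that idea (my adaptation under that assumption, not code from the repository):

import theano.tensor as tt

def gamma_pdf(dpt, alpha, beta):
    # gamma-distribution density written with theano ops, so that it can be
    # evaluated on pymc3 random variables such as a, b, c in the model above
    return beta**alpha / tt.gamma(alpha) * dpt**(alpha - 1) * tt.exp(-beta * dpt)

The np.exp call and the in-place accumulation of intg_res inside the quadrature loop would need the same treatment (tt.exp, and summing theano expressions instead of adding into a numpy array).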

Flowchart of the GitHub code (the plotting and data-saving steps are omitted):

[Figure: 1.png, workflow flowchart of the GitHub code]

