Bayesian Extraction of Jet Energy Loss Distributions in Heavy-Ion Collisions

Summary: study notes

1. Problem

The task is to fit the three parameters [alpha, beta, gamma] of a statistical model (they appear as a, b, c in the code below), using Bayesian analysis together with MCMC sampling. The priors of the three parameters are assumed to be uniform distributions, while the medium-induced pT loss is described by a gamma distribution whose mean depends on pT.
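
As a reference for the model code further down, the sketch below spells out that pT-loss distribution: a gamma distribution with shape a and mean <dpT> = b * pT^c * ln(pT). It is plain NumPy written here for illustration; the function name ptloss_pdf is not part of the original code.

import numpy as np
from scipy.special import gamma as Gamma

def ptloss_pdf(dpt, pt, a, b, c):
    """Illustrative pT-loss distribution matching the model code below:
    a gamma distribution with shape a and mean <dpt> = b * pt**c * ln(pt)."""
    mean_dpt = b * pt**c * np.log(pt)   # mean pT loss at this pT
    alpha = a                           # shape parameter
    beta = a / mean_dpt                 # rate parameter, so the mean is alpha/beta = mean_dpt
    return beta**alpha / Gamma(alpha) * dpt**(alpha - 1) * np.exp(-beta * dpt)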


2. Workflow

1. Data preparation

2. Model construction

3. Fit the model

4. Extract the fitted parameters

The detailed procedure should be clear from the code below.

import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
import theano.tensor as tt   # pymc3 backend; needed for exp/gamma on random variables
from pymc3 import Uniform, Normal
from scipy.special import gamma
from scipy.interpolate import interp1d
from scipy import optimize

def read_data():
    # read data; columns 0 and 1 are used as the pp reference spectrum (pT, yield)
    dat = np.loadtxt("./RAA_2760.txt")
    return dat[:, 0], dat[:, 1]

pp_x, pp_y = read_data()
gala30x = np.array([
0.047407180540804851462, 0.249923916753160223994,
0.614833454392768284613, 1.143195825666100798284,
1.836454554622572291486, 2.69652187455721519578,
3.72581450777950894932, 4.92729376584988240966,
6.304515590965074522826, 7.861693293370260468756,
9.603775985479262079849, 11.5365465979561397008,
13.6667446930642362949, 16.00222118898106625462,
18.55213484014315012403, 21.32720432178312892793,
24.34003576453269340098, 27.60555479678096102758,
31.14158670111123581829, 34.96965200824906954359,
39.11608494906788912192, 43.61365290848482780666,
48.50398616380420042732, 53.84138540650750561747,
59.69912185923549547712, 66.18061779443848965169,
73.44123859555988223951, 81.73681050672768572223,
91.5564665225368382555, 104.1575244310588945118
])
gala30w = np.array([
0.121677895367261782329, 0.283556882734938525493,
0.446432426678773424704, 0.610532130075812839931,
0.77630347862220587813, 0.944233288641719426021,
1.11484470167521153566, 1.288705432832675646608,
1.466439137624214862, 1.64873949785431911827,
1.83638767787041103674, 2.03027425167718247131,
2.23142718244322451682, 2.44104811309112149776,
2.66056024337508997273, 2.89167264137737623469,
3.13646833382438459, 3.3975275957906408941,
3.67810472956067256012, 3.9823886239009637486,
4.315899244899456113557, 4.686114009126298021,
5.1035026514834184093, 5.58333298168755125349,
6.1490444086574353235, 6.8391305457947582711,
7.7229478770061872416, 8.9437424683710389523,
10.87187293837791147282, 15.026111628122932986
])
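
# Note (added for clarity): gala30x / gala30w appear to be the nodes and weights of
# 30-point Gauss-Laguerre quadrature with the exp(+x) factor already absorbed into
# the weights, so that  integral_0^inf f(x) dx ~= sum_i gala30w[i] * f(gala30x[i]).
# Quick sanity checks (illustrative, not part of the original script):
#     np.sum(gala30w * np.exp(-gala30x))            # ~ 1  (= int_0^inf e^{-x} dx)
#     np.sum(gala30w * gala30x * np.exp(-gala30x))  # ~ 1  (= int_0^inf x e^{-x} dx)
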
class RAA2ELoss:
    def __init__(self, RAA_fname, pp_pt=None, pp_spectra=None):
        self.with_ppdata = False
        if not (pp_pt is None or pp_spectra is None):
            # interpolator for the pp reference spectrum f_pp(pT)
            self.pp_fit = interp1d(pp_pt, pp_spectra, fill_value="extrapolate")
            self.with_ppdata = True
        # RAA data file: columns are pT, pT error, RAA, RAA error
        RAA_data = np.loadtxt(RAA_fname)
        self.RAA_x = RAA_data[:, 0]
        self.RAA_xerr = RAA_data[:, 1]
        self.RAA_y = RAA_data[:, 2]
        self.RAA_yerr = RAA_data[:, 3]

    def mean_ptloss(self, pt, b, c):
        # mean pT loss <dpt>(pT) = b * pT**c * ln(pT)
        return b * pt**c * np.log(pt)

    def model(self):
        with pm.Model() as basic_model:
            # uniform priors for the three parameters (the alpha, beta, gamma of the text)
            a = Uniform('a', lower=0, upper=10)
            b = Uniform('b', lower=0, upper=10)
            c = Uniform('c', lower=0, upper=1)
            # Gauss-Laguerre quadrature over the pT loss dpt:
            #   R_AA(pT) = int_0^inf d(dpt) P(dpt | pT + dpt) f_pp(pT + dpt) / f_pp(pT)
            # The nodes are rescaled so that the largest node maps onto RAA_x itself.
            scale_fct = self.RAA_x / gala30x[-1]
            intg_res = np.zeros_like(self.RAA_x)
            for i, node in enumerate(gala30x):
                dpt = node * scale_fct            # quadrature node -> pT loss
                shifted_pt = self.RAA_x + dpt     # initial pT before energy loss
                mean_dpt = self.mean_ptloss(shifted_pt, b, c)
                # gamma distribution for the pT loss, shape a and mean <dpt>
                alpha = a
                beta = a / mean_dpt
                pdpt = beta**alpha / tt.gamma(alpha) * dpt**(alpha - 1) * tt.exp(-beta * dpt)
                intg_res = intg_res + scale_fct * gala30w[i] * pdpt * self.pp_fit(shifted_pt)
            # model prediction for RAA
            mu = intg_res / self.pp_fit(self.RAA_x)
            # Gaussian likelihood with the experimental errors as sigma
            likelihood_y = Normal('likelihood_y', mu=mu, sd=self.RAA_yerr,
                                  observed=self.RAA_y)
            # MAP estimate as the starting point, then Metropolis sampling
            start = pm.find_MAP(fmin=optimize.fmin_powell)  # newer pymc3: pm.find_MAP(method="Powell")
            step = pm.Metropolis()
            trace = pm.sample(1200, start=start, step=step)
        return trace
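
For completeness, a minimal driver is sketched below. It is hypothetical and not part of the original script: the RAA file name is a placeholder, and it assumes the four-column layout read in __init__ and that model() returns the sampled trace.

# Hypothetical usage; "RAA_PbPb_2760.txt" is a placeholder for an RAA data file
# with columns [pT, pT_err, RAA, RAA_err]; pp_x, pp_y come from read_data() above.
if __name__ == "__main__":
    eloss = RAA2ELoss("./RAA_PbPb_2760.txt", pp_pt=pp_x, pp_spectra=pp_y)
    trace = eloss.model()        # MAP starting point + Metropolis sampling
    print(pm.summary(trace))     # posterior summary for a, b, c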

P.S.: The code and data follow the GitHub repository; the pymc version used there is 1, so running that code with pymc3 will raise errors.

Flowchart of the GitHub code (plotting and data-saving steps omitted):

1.png
