💥1 Overview
This article addresses workflow scheduling in a cloud environment, considering multiple servers that each host a different number of heterogeneous virtual machines (VMs). It presents MFSSA, a multi-objective workflow scheduling method that combines Moth-Flame Optimization (MFO) with the Salp Swarm Algorithm (SSA) and targets four objectives: makespan, throughput, resource utilization, and reliability. The main goal of the MFSSA algorithm is to find, by minimizing the objective function, the optimal server and VM, and thereby the best VM for each workflow task.
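To make the four objectives concrete, the sketch below shows one plausible way, not the paper's exact formulation, to fold makespan, throughput, resource utilization, and an exponential reliability model into a single weighted fitness value for a task-to-VM assignment. The weights `w`, the reliability model, and the function name are illustrative assumptions; Python is used here only for compactness.

```python
import math

def evaluate_schedule(task_lengths, vm_mips, assignment,
                      vm_failure_rates, w=(0.4, 0.2, 0.2, 0.2)):
    """Score one task->VM assignment; lower is better (assumed weighted sum)."""
    n_vms = len(vm_mips)
    # Per-VM finish time: total execution time of the tasks placed on it.
    finish = [0.0] * n_vms
    for task, vm in enumerate(assignment):
        finish[vm] += task_lengths[task] / vm_mips[vm]
    makespan = max(finish)
    throughput = len(task_lengths) / makespan        # tasks per unit time
    utilization = sum(finish) / (n_vms * makespan)   # in (0, 1]
    # Exponential model: probability that no used VM fails during its busy time.
    reliability = 1.0
    for vm in range(n_vms):
        reliability *= math.exp(-vm_failure_rates[vm] * finish[vm])
    # Minimize makespan; maximize the other three, so subtract them.
    return (w[0] * makespan - w[1] * throughput
            - w[2] * utilization - w[3] * reliability)
```

With two tasks on two identical VMs, the term that dominates is the makespan of the slower VM; the other three terms reward keeping both VMs busy and failure-free.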
📚2 Results
Code excerpt:
% Define problem parameters
search_agents_number = 50;   % Number of search agents
maxLoop = 100;               % Maximum number of iterations
%-------------------------------------------------------------------------%
% MIPS: Millions of Instructions Per Second
% ECT:  Expected Completion Time
% A cloud server (node) can run several separate VM instances, and each VM
% instance includes various resources such as CPU and memory.
% The CPU (cpu_i(t)) and memory (mem_i(t)) usage of a cloud server lies in
% [0, 100], i.e. 0 <= cpu_i(t), mem_i(t) <= 100.
% ECT(k, j) is the expected execution time of task T(k,1) on vm(:, j).
Node1 = randi([0 100], 2, 3);   % vm1..vm3: each VM has 2 features (CPU capacity in MIPS, memory)
Node2 = randi([0 100], 2, 4);   % vm1..vm4
Node3 = randi([0 100], 2, 5);   % vm1..vm5
Node4 = randi([0 100], 2, 6);   % vm1..vm6
Node5 = randi([0 100], 2, 8);
Node6 = randi([0 100], 2, 10);
Node7 = randi([0 100], 2, 9);
Node8 = randi([0 100], 2, 12);
%-------------------------------------------------------------------------%
failure_Rate1 = [9e-5, 7e-5, 8e-5, 6e-5, 5e-5, 6e-5]; % deadline failure rate of the tasks at time t, per processor
num_core = [15, 10, 15, 20, 25, 20];                  % number of cores
% Rows 3-5 of each node: failure rate, number of cores, VM capacity.
% The eight identical per-node loops of the original are collapsed into one
% loop over a cell array; the behavior is unchanged.
total_Node = {Node1, Node2, Node3, Node4, Node5, Node6, Node7, Node8};
for n = 1:numel(total_Node)
    for jj = 1:size(total_Node{n}, 2)
        s = randi(size(failure_Rate1, 2));
        total_Node{n}(3, jj) = failure_Rate1(1, s);                   % failure rate
        total_Node{n}(4, jj) = num_core(1, s);                       % number of cores
        total_Node{n}(5, jj) = num_core(1, s)*total_Node{n}(2, jj);  % capacity of the VM instance
    end
end
[Node1, Node2, Node3, Node4, Node5, Node6, Node7, Node8] = total_Node{:};
% save vm_info.mat Node1 Node2 Node3 Node4 Node5 Node6 Node7 Node8;
%=========================================================================%
% All VMs side by side (equivalent to the original block-wise index assignments)
total_vm = [Node1, Node2, Node3, Node4, Node5, Node6, Node7, Node8];
%=========================================================================%
% DAG: build the task-dependency / communication-cost matrix from the
% workflow trace columns
job_uses = row(:, 22);     % task size
child_ref = text(:, 23);
child_parent = text(:, 24);
for jj = 2:size(child_parent, 1)
    job_uses1(jj, 1) = cell2mat(job_uses(jj, 1));
end
B1 = unique(child_parent); % remove duplicate entries
B2 = unique(child_ref);
ff = zeros(size(B1, 1), size(B1, 1));
for k1 = 2:size(B1, 1)-1
    for k2 = 2:size(B2, 1)-1
        aa = find(contains(child_parent, B1(k1)));
        for i3 = 1:size(aa, 1)
            m1 = child_ref(aa(i3, 1), 1);
            bb = find(contains(B2, m1));
            m2 = child_parent(aa(i3, 1), 1);
            if strcmp(m1, m2)
                ff(k1-1, bb) = -1;                       % self reference: no edge
            else
                ff(k1-1, bb) = job_uses1(aa(i3, 1), 1);  % communication cost
            end
        end
    end
end
fff = ff;
fff(fff == 0) = -1;        % -1 marks "no dependency"
fff = fff(1:size(fff, 1)-2, 2:size(fff, 2)-1);
%=========================================================================%
taskNum = size(B1, 1) - 2;
nvar = taskNum;            % number of tasks to schedule
nvms = size(total_vm, 2);  % number of virtual machines
dim = nvar;
xmin = 1;  xmax = nvms;
lowerBound = xmin;  upperBound = xmax;
%=========================================================================%
% ECT = zeros(nvar, nvms);
% for i1 = 1:nvar
%     for j1 = 1:nvms
%         ECT(i1, j1) = DataSet(i1, :)./total_vm(1, j1);
%     end
% end
%=========================================================================%
DAG = FunctionClass;       % produce the structure
DAG.E = fff;               % communication cost between Ti and Tj (task dependencies)
DAG.arrivalTime = 0;
%=========================================================================%
% Example: a workflow with 6 tasks
taskNum = 6;
DAG.Wcet = randi([1 100], size(total_vm, 2), taskNum+1); % computation cost of Ti on pj (task execution time matrix)
DAG.E = [-1 20 24 -1 -1 -1;
         -1 -1 -1 12 -1 -1;
         -1 -1 -1 -1 43 -1;
         -1 -1 -1 -1 -1 70;
         -1 -1 -1 -1 -1 93;
         -1 -1 -1 -1 -1 -1];
%=========================================================================%
energy_Spec = abs(rand(size(total_vm, 2), taskNum));     % (num processors) x (num tasks)
%=========================================================================%
% MFO_SSA (wrap in "for i = 1:30 ... end" to average over 30 runs)
[Resourse_utilization_MFSSA, Throughput_MFSSA, Reliability_MFSSA, ...
 Fitnes_value_MFSSA, MakeSpanMax_MFSSA, index_server_MFSSA, ...
 pos_best_MFSSA, Leader_pos_MFSSA] = forLoopFuc_MFSSA(taskNum, DAG, ...
    energy_Spec, search_agents_number, maxLoop, Node1, Node2, Node3, ...
    Node4, Node5, Node6, Node7, Node8, total_vm, nvar, dim, ...
    lowerBound, upperBound);
%=========================================================================%
fprintf('1-Resourse_utilization_MFSSA is %f\n', Resourse_utilization_MFSSA);
fprintf('1-Throughput_MFSSA is %f\n', Throughput_MFSSA);
fprintf('1-Reliability_MFSSA is %f\n', Reliability_MFSSA);
fprintf('1-MakeSpanMax_MFSSA is %f\n', MakeSpanMax_MFSSA);
fprintf('1-Fitnes_value_MFSSA is %f\n', Fitnes_value_MFSSA);
%=========================================================================%
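The bounds `lowerBound = 1 .. upperBound = nvms` imply that each search agent is a continuous vector of length `nvar` that must be decoded into a discrete task-to-VM assignment. The repository's decoder is not shown; a minimal round-and-clamp rule (a common choice, but an assumption here) can be sketched as follows, in Python for compactness:

```python
def position_to_assignment(pos, nvms):
    """Decode a continuous agent position into 1-based VM indices.

    Each coordinate is rounded to the nearest integer and clamped into
    [1, nvms]; element k of the result is the VM chosen for task k.
    """
    return [min(max(int(round(p)), 1), nvms) for p in pos]
```

For example, a position of 3.6 for a task maps to VM 4, while out-of-range values are pushed back to the nearest valid VM index.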
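The body of `forLoopFuc_MFSSA` is not shown in the excerpt. As a rough illustration of the hybrid idea only, the sketch below combines MFO's logarithmic-spiral update with SSA's leader/follower chain on a toy continuous minimization problem. The half-and-half population split, the coefficients, and the fixed seed are all assumptions; the paper's actual coupling may differ.

```python
import math
import random

def sphere(x):
    """Toy objective: sum of squares, minimized at the origin."""
    return sum(v * v for v in x)

def mfssa(obj, dim, lb, ub, n_agents=30, max_iter=200, seed=1):
    """Illustrative MFO+SSA hybrid: minimize obj over [lb, ub]^dim."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    best = min(pop, key=obj)[:]
    for it in range(max_iter):
        pop.sort(key=obj)                                  # best agents first
        best = min(best, pop[0], key=obj)[:]               # elitist best-so-far
        c1 = 2 * math.exp(-(4 * (it + 1) / max_iter) ** 2) # SSA step coefficient
        a = -1 - it / max_iter                             # MFO spiral range
        for i, x in enumerate(pop):
            for j in range(dim):
                if i < n_agents // 2:
                    # MFO half: log-spiral flight around the best "flame"
                    d = abs(best[j] - x[j])
                    t = (a - 1) * rng.random() + 1
                    x[j] = d * math.exp(t) * math.cos(2 * math.pi * t) + best[j]
                elif i == n_agents // 2:
                    # SSA leader: move around the food source (best)
                    step = c1 * ((ub - lb) * rng.random() + lb)
                    x[j] = best[j] + step if rng.random() < 0.5 else best[j] - step
                else:
                    # SSA follower: average with the preceding salp in the chain
                    x[j] = (x[j] + pop[i - 1][j]) / 2
                x[j] = min(max(x[j], lb), ub)              # clamp to bounds
    pop.sort(key=obj)
    return min(best, pop[0], key=obj)

best = mfssa(sphere, dim=5, lb=-10, ub=10)  # deterministic for a fixed seed
```

In the real scheduler, `obj` would be the multi-objective fitness of the decoded task-to-VM assignment rather than the sphere function used here.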
🎉3 References
Some of the theory comes from online sources; please contact us for removal in case of any infringement.
[1] Taybeh Salehnia, Saeed Naderi, Seyedali Mirjalili, Mahmood Ahmadi (2023). A workflow scheduling in cloud environment using a combination of Moth-Flame and Salp Swarm algorithms.