[ACM-ICPC 2018 Xuzhou Network Contest | Hard to prepare] A formula solution to a circular coloring problem


Problem Description

After the Incident, a feast is usually held in the Hakurei Shrine. This time Reimu asked Kokoro to deliver a Nogaku show during the feast. To enjoy the show, every audience member has to wear a Nogaku mask and sit around in a circle.


There are N guests Reimu serves. Kokoro has 2^k masks numbered 0, 1, ⋯, 2^k − 1, and every guest wears one of the masks. The masks have the dark power of Dark Nogaku, and to prevent guests from being hurt by the power, two guests sitting next to each other must ensure that if their masks are numbered i and j, then i XNOR j must be positive (two guests can wear the same mask). i XNOR j means ~(i XOR j), where every number has k bits. (1 XNOR 1 = 1, 0 XNOR 0 = 1, 1 XNOR 0 = 0.)


You may have seen 《A Summer Day's Dream》, a doujin animation of Touhou Project. Things go like the anime: Suika activated her ability, and the feast will loop infinitely many times. This really troubles Reimu: to keep her guests from getting bored, she must prepare enough different Nogaku scenes. Reimu finds that each time the same guest will sit in the same seat, and she just has to prepare a new scene for each specific mask distribution. Two distribution plans are considered different if any guest wears a different mask.


In order to save faith for the Shrine, Reimu has to calculate how many different Nogaku scenes she and Kokoro have to prepare so that the guests do not get bored. Since the number may be very large, Reimu only wants the answer modulo 1e9+7. Reimu never attended Terakoya, so she doesn't know how to do modular arithmetic. She wishes you to help her figure out the answer, and she promises that after you succeed she will give you a balloon as a gift.


Input


First line contains one number T, the number of test cases (T ≤ 20).

Next T lines each contain two numbers, N and k (0 < N, k ≤ 1e6).


Output


For each test case output one line with a single number: the number of scenes Reimu and Kokoro have to prepare, modulo 1e9+7.


Sample Input


2
3 1
4 2


Sample Output


2
84


Problem Source


ACM-ICPC 2018 Xuzhou Network Preliminary Contest


Method 1:


Reference implementation:

#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

const int maxn = 1000000 + 5;
const ll mod = 1e9 + 7;

ll a[maxn];     // a[i] = 2^i mod p, so a[k] is the number of masks
int n, k;

// Fast modular exponentiation.
ll qPow(ll b, ll e) {
    ll r = 1;
    for (b %= mod; e > 0; e >>= 1, b = b * b % mod)
        if (e & 1) r = r * b % mod;
    return r;
}

void get() {
    a[0] = 1;
    for (int i = 1; i <= 1000000; i++)
        a[i] = a[i - 1] * 2 % mod;
}

// Originally written as a recursion dfs(now) stepping down by 2; with n up
// to 1e6 the recursion depth reaches 5e5, so it is safer as a loop.
// The recurrence, with m = 2^k masks:
//   f(1) = m,  f(2) = m * (m - 1),
//   f(now) = m * (m - 1)^(now - 2) * (m - 2) + f(now - 2).
ll solve(ll n) {
    ll ans = (n % 2 == 1) ? a[k] : a[k] * (a[k] - 1) % mod;   // base case
    for (ll now = (n % 2 == 1) ? 3 : 4; now <= n; now += 2) {
        ll cur = a[k] * qPow(a[k] - 1, now - 2) % mod * (a[k] - 2) % mod;
        ans = (ans + cur) % mod;
    }
    return ans;
}

int main() {
    get();
    int T; scanf("%d", &T);
    while (T--) {
        scanf("%d%d", &n, &k);
        printf("%lld\n", solve((ll)n));
    }
    return 0;
}


Method 2:


(The original derivation image is unavailable. The closed form it derives, with m = 2^k masks, is: answer = (m − 1)^N + 1 for odd N, and (m − 1)^N + (m − 1) for even N.)


#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
const ll mod = 1e9 + 7;

ll qPow(ll b, ll e) {   // fast exponentiation mod p
    ll r = 1;
    for (b %= mod; e > 0; e >>= 1, b = b * b % mod)
        if (e & 1) r = r * b % mod;
    return r;
}

int main() {
    int T; scanf("%d", &T);
    while (T--) {
        ll n, k; scanf("%lld%lld", &n, &k);
        ll tot = qPow(2, k);        // number of masks, 2^k
        ll ans = qPow(tot - 1, n);  // (2^k - 1)^n
        if (n & 1) ans += 1;        // odd cycle
        else       ans += tot - 1;  // even cycle
        printf("%lld\n", (ans + mod) % mod);
    }
    return 0;
}