Codeforces 716A Crazy Computer

A. Crazy Computer
time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output

ZS the Coder is coding on a crazy computer. If you don't type a word for c consecutive seconds, everything you typed disappears!

More formally, if you typed a word at second a and the next word at second b, then if b - a ≤ c, the new word is simply appended to the words already on the screen. If b - a > c, everything on the screen disappears, and then the word you just typed appears on the screen.

For example, if c = 5 and you typed words at seconds 1, 3, 8, 14, 19, 20, then at the second 8 there will be 3 words on the screen. After that, everything disappears at the second 13 because nothing was typed. At the seconds 14 and 19 two more words are typed, and finally, at the second 20, one more word is typed, so a total of 3 words remain on the screen.

You're given the times when ZS the Coder typed the words. Determine how many words remain on the screen after he finished typing everything.

Input

The first line contains two integers n and c (1 ≤ n ≤ 100 000, 1 ≤ c ≤ 10^9) — the number of words ZS the Coder typed and the crazy computer delay respectively.

The next line contains n integers t1, t2, ..., tn (1 ≤ t1 < t2 < ... < tn ≤ 10^9), where ti denotes the second when ZS the Coder typed the i-th word.

Output

Print a single positive integer — the number of words that remain on the screen after all n words were typed, in other words, at the second tn.

Examples
Input
6 5
1 3 8 14 19 20
Output
3
Input
6 1
1 3 5 7 9 10
Output
2
Note

The first sample is already explained in the problem statement.

For the second sample, after typing the first word at the second 1, it disappears because the next word is typed at the second 3 and 3 - 1 > 1. Similarly, only 1 word will remain at the second 9. Then, a word is typed at the second 10, so there will be two words on the screen, as the old word won't disappear because 10 - 9 ≤ 1.

Problem link: http://codeforces.com/problemset/problem/716/A

Problem summary:

You are given the times a[i] (a[i] ≤ 10^9) of N keystrokes (N ≤ 100,000). If nothing is typed for more than c seconds, the screen is cleared; otherwise each keystroke adds one more character to the screen. The question is how many characters remain on the screen at the end.

In other words, when the next character is typed, if the time since the previous keystroke does not exceed c, everything already on the screen is kept and typing continues; if it does exceed c, all the earlier characters disappear and only the new one remains.

The accepted (AC) code is given below:

#include <bits/stdc++.h>
using namespace std;

int main()
{
    int a[100005];
    int n, t;
    while (scanf("%d%d", &n, &t) != EOF)
    {
        for (int i = 0; i < n; i++)
            scanf("%d", &a[i]);
        int ans = 1;                   // the first word is always on the screen
        for (int i = 0; i < n - 1; i++)
        {
            if (a[i + 1] - a[i] <= t)  // typed within the delay: keep everything,
                ans++;                 // one more word on the screen
            else                       // gap too long: screen clears,
                ans = 1;               // only the new word remains
        }
        printf("%d\n", ans);
    }
    return 0;
}
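
Since the answer depends only on the gaps between consecutive keystrokes, the array in the AC code is not strictly necessary. The following is a minimal streaming sketch of the same counter-reset idea, reading the times one at a time with O(1) extra memory; it is an alternative illustration, not part of the original submission.

#include <cstdio>

int main()
{
    int n, c;
    if (scanf("%d%d", &n, &c) != 2) return 0;
    int prev, cur, ans = 1;   // the first word always stays on the screen
    scanf("%d", &prev);       // time of the first keystroke
    for (int i = 1; i < n; i++)
    {
        scanf("%d", &cur);
        if (cur - prev <= c)  // typed within the delay
            ans++;
        else                  // screen cleared, only the new word remains
            ans = 1;
        prev = cur;
    }
    printf("%d\n", ans);
    return 0;
}

Both versions run in O(n) time; the streaming one just avoids the 100 005-element buffer, which only matters when memory is tight.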
