
进击的代码狗
MySQL export garbled by inconsistent encodings. Most exports are fine, but tables whose rows contain scripts (rich-text content) can hit this problem; forcing the character set fixes it.
Specify the encoding when importing:
mysql -u root -p --default-character-set=utf8
Or specify the encoding explicitly when exporting, and the problem never appears:
mysqldump -uroot -p --default-character-set=utf8 mydb > E://xxxx.sql
GROUP_CONCAT(expr) is used together with a GROUP BY clause:
SELECT username, GROUP_CONCAT(address) FROM pinjie GROUP BY username;
SELECT username, GROUP_CONCAT(address SEPARATOR ",") FROM pinjie GROUP BY username;
SELECT name, GROUP_CONCAT(address ORDER BY address DESC SEPARATOR "#") address FROM t_aa GROUP BY name;
Project layout: under resources, create separate folders holding the dev and the production configuration files. pom.xml:
<!-- maven: package per environment -->
<build>
  <plugins>
    <plugin>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.7</source>
        <target>1.7</target>
        <encoding>UTF-8</encoding>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <version>2.3</version>
      <configuration>
        <warName>${project.artifactId}</warName>
        <webResources>
          <resource>
            <directory>src/main/resources/${package.environment}</directory>
            <targetPath>WEB-INF/classes</targetPath>
            <filtering>true</filtering>
          </resource>
        </webResources>
      </configuration>
    </plugin>
  </plugins>
</build>
<profiles>
  <profile>
    <id>dev</id>
    <properties>
      <package.environment>dev</package.environment>
    </properties>
    <activation>
      <!-- use the dev environment by default -->
      <activeByDefault>true</activeByDefault>
    </activation>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <package.environment>prod</package.environment>
    </properties>
  </profile>
</profiles>
<!-- maven: package per environment -->
See the Maven documentation for details on how profiles work.
package -P prod builds for production; package -P dev builds for local debugging.
<plugins>
  <plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
      <source>1.7</source>
      <target>1.7</target>
      <encoding>UTF-8</encoding>
    </configuration>
  </plugin>
</plugins>
If the configuration above still does not solve the problem, delete the file that reports the error (back it up first) and copy a fresh one back in; that resolves it.
htmlunit headless-browser crawler pitfalls — the dependency set:
<!-- htmlunit start -->
<dependency>
  <groupId>org.jsoup</groupId>
  <artifactId>jsoup</artifactId>
  <version>1.10.3</version>
</dependency>
<dependency>
  <groupId>net.sourceforge.htmlunit</groupId>
  <artifactId>htmlunit</artifactId>
  <version>2.19</version>
</dependency>
<dependency>
  <groupId>xml-apis</groupId>
  <artifactId>xml-apis</artifactId>
  <version>1.4.01</version>
</dependency>
<dependency>
  <groupId>xerces</groupId>
  <artifactId>xercesImpl</artifactId>
  <version>2.11.0</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.4.6</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.5.1</version>
</dependency>
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>2.4</version>
</dependency>
<!-- htmlunit end -->
These are all of the dependencies for htmlunit 2.19; other versions differ considerably.
/**
 * Removes commented-out markup from a web page.
 *
 * @param str the page source
 * @return the source with all <!-- ... --> comments removed
 */
public static String removeDisabledCode(String str) {
    Pattern pattern = Pattern.compile("<!--[\\w\\W\r\\n]*?-->");
    Matcher matcher = pattern.matcher(str);
    return matcher.replaceAll("");
}
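A minimal usage sketch of the same non-greedy comment-stripping regex; the class name and the sample markup are illustrative, not from the original code:

```java
import java.util.regex.Pattern;

public class CommentStripper {
    // Non-greedy match removes each HTML comment, including multi-line ones.
    static final Pattern COMMENT = Pattern.compile("<!--[\\w\\W\r\\n]*?-->");

    public static String strip(String html) {
        return COMMENT.matcher(html).replaceAll("");
    }

    public static void main(String[] args) {
        String html = "<p>kept</p><!-- <script>removed()</script>\n--><p>also kept</p>";
        System.out.println(strip(html)); // prints <p>kept</p><p>also kept</p>
    }
}
```

The `*?` (non-greedy) quantifier matters: a greedy `[\w\W]*` would swallow everything between the first `<!--` and the last `-->` on the page.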
package com.jxd

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import java.sql.Connection
import java.sql.DriverManager

object hello {
  def main(args: Array[String]): Unit = {
    var conf = new SparkConf().setAppName("Hello World")
    var sc = new SparkContext(conf)
    var input = sc.textFile("test/hello", 2)
    var count = input.flatMap(name => name.split(" ")).map((_, 1)).reduceByKey((a, b) => a + b)
    count.foreachPartition(insertToMysql)
  }

  def insertToMysql(iterator: Iterator[(String, Int)]): Unit = {
    val driver = "com.mysql.jdbc.Driver"
    val url = "jdbc:mysql://192.168.10.58:3306/test"
    val username = "root"
    val password = "1"
    var connectionMqcrm: Connection = null
    Class.forName(driver)
    connectionMqcrm = DriverManager.getConnection(url, username, password)
    val sql = "INSERT INTO t_spark (`name`,`num`) VALUES (?,?)"
    iterator.foreach(data => {
      val statement = connectionMqcrm.prepareStatement(sql)
      statement.setString(1, data._1)
      statement.setInt(2, data._2)
      var result = statement.executeUpdate()
      if (result == 1) {
        println("insert into mysql succeeded.............")
      }
    })
  }
}

Caused by: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at com.jxd.hello$.insertToMysql(hello.scala:22)
at com.jxd.hello$$anonfun$main$1.apply(hello.scala:13)
at com.jxd.hello$$anonfun$main$1.apply(hello.scala:13)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2069)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:108)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

In Spark versions before 1.4, putting the MySQL driver into /spark/jars has no effect; the driver jar must be specified when submitting the job, e.g.:
spark-submit --master spark://192.168.10.160:7077 --driver-class-path /usr/spark/jars/mysql-connector-java-5.1.18-bin.jar --class com.jxd.hello /usr/spark/wc.jar /usr/spark/test/hello
Newer versions such as Spark 2.2 have fixed this.
Note: every server in the cluster needs the MySQL driver jar (add it on one node first, then copy it to the rest over the network). Once every node has the driver, submit directly:
spark-submit --master spark://192.168.10.160:7077 --class com.jxd.hello /usr/spark/wc.jar /usr/spark/test/hello
http://blog.csdn.net/fx677588/article/details/58164902
1: The namenode starts but the datanode does not.
Fix: compare the clusterID in /hadoop/tmp/dfs/name/current/VERSION with the one in data/current/VERSION and make them match. If they already match and it still fails, delete the current directories on all master and slave nodes and re-run format to regenerate them.
1. Pick three servers (64-bit CentOS)
114.55.246.88 master node
114.55.246.77 slave node
114.55.246.93 slave node
If you work as a regular user you must also know the root password, because some steps require root; working as root avoids the issue. I worked as root.
2. Edit the hosts file
Edit the hosts file on all three servers:
vi /etc/hosts
Append to the end of the original file:
114.55.246.88 Master
114.55.246.77 Slave1
114.55.246.93 Slave2
Save, then run:
source /etc/hosts
3. Passwordless SSH
3.1 Install and start SSH
Two services are needed: ssh and rsync. Check whether they are already installed:
rpm -qa|grep openssh
rpm -qa|grep rsync
If not, install them:
yum install ssh (the SSH service)
yum install rsync (rsync is a remote file-synchronization tool that can quickly sync files between hosts over a LAN/WAN)
service sshd restart (start the service)
3.2 Let Master log in to every Slave without a password
On the Master node:
1) Generate a key pair:
ssh-keygen -t rsa -P ''
The pair, id_rsa and id_rsa.pub, is stored under /root/.ssh by default.
2) Append id_rsa.pub to the authorized keys:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3) Uncomment the following lines in /etc/ssh/sshd_config:
RSAAuthentication yes # enable RSA authentication
PubkeyAuthentication yes # enable public/private key authentication
AuthorizedKeysFile .ssh/authorized_keys # public key file path (the file generated above)
4) Restart sshd so the changes take effect:
service sshd restart
5) Verify passwordless login to the local machine:
ssh localhost
6) Copy the public key to every Slave:
scp /root/.ssh/id_rsa.pub root@Slave1:/root/
scp /root/.ssh/id_rsa.pub root@Slave2:/root/
Then configure the Slave nodes; the following is done on Slave1.
1) Create /root/.ssh if it does not exist yet:
mkdir /root/.ssh
2) Append Master's public key to Slave1's authorized_keys:
cat /root/id_rsa.pub >> /root/.ssh/authorized_keys
3) Edit /etc/ssh/sshd_config as in steps 3 and 4 for the Master.
4) From Master, ssh into Slave1 without a password:
ssh 114.55.246.77
5) Delete the id_rsa.pub file under /root/:
rm –r /root/id_rsa.pub
Repeat the five steps above for Slave2.
3.3 Let every Slave log in to Master without a password
The following is done on Slave1.
1) Create Slave1's own key pair and append its public key to authorized_keys:
ssh-keygen -t rsa -P ''
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
2) Copy Slave1's id_rsa.pub to /root/ on Master:
scp /root/.ssh/id_rsa.pub root@Master:/root/
The following is done on Master.
1) Append Slave1's public key to Master's authorized_keys:
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
2) Delete the copied id_rsa.pub file:
rm –r /root/id_rsa.pub
When done, test passwordless login from Slave1 to Master:
ssh 114.55.246.88
Repeat the steps above between Slave2 and Master. Master can then log in to every Slave without a password, and every Slave can log in to Master.
4. Base environment (Java and Scala)
4.1 Java 1.8
1) Download jdk-8u121-linux-x64.tar.gz and unpack it:
tar -zxvf jdk-8u121-linux-x64.tar.gz
2) Add the Java environment variables to /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_121
PATH=$JAVA_HOME/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/rt.jar
export JAVA_HOME PATH CLASSPATH
3) Reload the configuration:
source /etc/profile
4.2 Scala 2.11.8
1) Download and install scala-2.11.8.rpm:
rpm -ivh scala-2.11.8.rpm
2) Add the Scala environment variables to /etc/profile:
export SCALA_HOME=/usr/share/scala
export PATH=$SCALA_HOME/bin:$PATH
3) Reload the configuration:
source /etc/profile
5. Hadoop 2.7.3 fully distributed setup
The following is done on the Master node:
1) Download the binary package hadoop-2.7.3.tar.gz
2) Unpack and move it (I habitually put software under /opt):
tar -zxvf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 /opt
3) Edit the configuration files.
Add to /etc/profile:
export HADOOP_HOME=/opt/hadoop-2.7.3/
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_ROOT_LOGGER=INFO,console
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Then run:
source /etc/profile
In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, set JAVA_HOME:
export JAVA_HOME=/usr/local/jdk1.8.0_121
In $HADOOP_HOME/etc/hadoop/slaves, remove localhost and put:
Slave1
Slave2
Edit $HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://Master:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop-2.7.3/tmp</value>
  </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>Master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
  </property>
</configuration>
Copy the template to create mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>Master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>Master:19888</value>
  </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>Master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>Master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>Master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>Master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>Master:8088</value>
  </property>
</configuration>
4) Copy the hadoop folder from Master to Slave1 and Slave2:
scp -r /opt/hadoop-2.7.3 root@Slave1:/opt
scp -r /opt/hadoop-2.7.3 root@Slave2:/opt
5) On Slave1 and Slave2, edit /etc/profile the same way as on Master.
6) Start the cluster from the Master node; format the namenode first:
hadoop namenode -format
Start:
/opt/hadoop-2.7.3/sbin/start-all.sh
The fully distributed Hadoop environment is now set up.
7) Check that the cluster started:
jps
Master shows:
SecondaryNameNode
ResourceManager
NameNode
The Slaves show:
NodeManager
DataNode
6. Spark 2.1.0 fully distributed setup
All of the following is done on the Master node.
1) Download the binary package spark-2.1.0-bin-hadoop2.7.tgz
2) Unpack and move it:
tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
mv spark-2.1.0-bin-hadoop2.7 /opt
3) Edit the configuration files.
Add to /etc/profile:
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin
Copy spark-env.sh.template to spark-env.sh:
cp spark-env.sh.template spark-env.sh
Add to $SPARK_HOME/conf/spark-env.sh:
export JAVA_HOME=/usr/local/jdk1.8.0_121
export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/opt/hadoop-2.7.3
export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=114.55.246.88
export SPARK_MASTER_HOST=114.55.246.88
export SPARK_LOCAL_IP=114.55.246.88
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.7.3/bin/hadoop classpath)
Copy slaves.template to slaves:
cp slaves.template slaves
Add to $SPARK_HOME/conf/slaves:
Master
Slave1
Slave2
4) Copy the configured spark folder to Slave1 and Slave2:
scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@Slave1:/opt
scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@Slave2:/opt
5) Adjust Slave1 and Slave2.
On Slave1 and Slave2, add the Spark entries to /etc/profile, the same as on Master.
In $SPARK_HOME/conf/spark-env.sh on Slave1 and Slave2, change export SPARK_LOCAL_IP=114.55.246.88 to each node's own IP.
6) Start the cluster from the Master node:
/opt/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
7) Check that the cluster started:
jps
On top of the Hadoop processes, Master additionally shows:
Master
and the Slaves additionally show:
Worker
The above is from https://www.cnblogs.com/zengxiaoliang/p/6478859.html — verified working; a few steps differed slightly during my setup, but nothing that affects use.
http://Master:50070 — Hadoop web UI
http://Master:8080 — Spark web UI
Submit a job:
spark-submit --master spark://192.168.10.160:7077 --class com.jxd.hello /usr/spark/wordcount.jar /usr/spark/test/hello (a file stored on HDFS)
hdfs dfs -cat /home/jinxudong/result/helloword/
to view the computed result in Hadoop.
1. Get a registration code from http://idea.lanyus.com/.
2. Or fill in one of the following license servers:
http://intellij.mandroid.cn/
http://idea.imsxm.com/
http://idea.iteblog.com/key.php
All of the above methods worked when tested.
Error: /usr/local/bin/rar: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory
Cause: a 32-bit program was installed on a 64-bit system. Fix: yum install glibc.i686
If similar errors remain after reinstalling, keep installing the missing packages; e.g. for
error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
run yum install libstdc++.so.6
Append a UUID as the version parameter:
<script charset="utf-8" src="${basePath}js/souke/soukegalist.js?v=<%=UUID.randomUUID().toString()%>"></script>
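The same version-busting URL can be built in plain Java; this is a sketch only, and the class and method names are made up:

```java
import java.util.UUID;

public class CacheBuster {
    // Appends a random UUID as a version parameter so the browser
    // treats the script as a new resource and re-fetches it.
    public static String bust(String scriptUrl) {
        return scriptUrl + "?v=" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(bust("js/souke/soukegalist.js"));
    }
}
```

Note the trade-off: a random UUID defeats browser caching on every page load. A per-release value (a build number or content hash) would force a re-fetch only after deployments while keeping caching between them.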
<script src="${pageContext.request.contextPath}/assets/js/jquery-1.8.3.js"></script>
<script type="text/javascript">
$(function() {
    $("#upload").click(function() {
        var formData = new FormData($("#uploadForm")[0]);
        $.ajax({
            url : "${pageContext.request.contextPath}/uploadoldimage",
            type : 'POST',
            data : formData,
            async : false,
            cache : false,
            contentType : false,
            processData : false,
            success : function(data) {
                var obj = eval('(' + data + ')');
                if (obj.status == 'success') {
                    alert("Upload complete");
                    $("#posterShow").attr("src", obj.imgurl);
                    $("#posterShow").css("width", "450px");
                    $("#posterShow").css("height", "300px");
                    $("#showimg").val(obj.imgurl);
                }
            },
            error : function() {
                alert("Network error");
            }
        });
    })
})
</script>
Works in mainstream browsers; if you must support IE, look for another approach.
1. wget http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm — download the nginx release package matching the current system version
2. rpm -ivh nginx-release-centos-7-0.el7.ngx.noarch.rpm — set up the nginx yum repository
3. yum install nginx — download and install nginx
systemctl start nginx — start the nginx service
The default configuration files live under /etc/nginx and are enough to run nginx correctly; to customize, edit nginx.conf and the other files there.
Test: enter the IP of the machine running nginx in a browser; if everything is fine, you should see the nginx welcome page.
public static String httpPost(String url, JSONObject json) {
    String respContent = null;
    try {
        HttpPost httpPost = new HttpPost(url);
        CloseableHttpClient client = HttpClients.createDefault();
        // send the body as JSON; "utf-8" avoids garbled Chinese characters
        StringEntity entity = new StringEntity(json.toString(), "utf-8");
        entity.setContentEncoding("UTF-8");
        entity.setContentType("application/json");
        httpPost.setEntity(entity);
        HttpResponse resp = client.execute(httpPost);
        if (resp.getStatusLine().getStatusCode() == 200) {
            HttpEntity he = resp.getEntity();
            respContent = EntityUtils.toString(he, "UTF-8");
        }
    } catch (Exception ex) {
        respContent = null;
    }
    return respContent;
}

/**
 * @param url the address to request
 * @return the status code, or null on failure
 */
public static String httpGet(String url) {
    String status = null;
    try {
        HttpGet request = new HttpGet(url); // issue a GET request
        request.getParams().setParameter(HttpMethodParams.SO_TIMEOUT, 3000);
        HttpClient httpClient = new DefaultHttpClient();
        HttpResponse response = httpClient.execute(request);
        // a 200 status means the connection is healthy
        if (response.getStatusLine().getStatusCode() == 200) {
            status = "200";
        }
    } catch (Exception e) {
        e.printStackTrace();
        log.error(e);
    }
    return status;
}
public int insertChanDaoTaskModel(List<T> t) {
    Session session = this.hibernateTemplate.getSessionFactory().openSession();
    Transaction tran = session.beginTransaction();
    try {
        for (int i = 0; i < t.size(); i++) {
            session.save(t.get(i));
            // flush and clear every 2000 entities to keep the session small
            if (i % 2000 == 0) {
                session.flush();
                session.clear();
            }
        }
        tran.commit();
        return 1;
    } catch (Exception ex) {
        return 0;
    } finally {
        session.close();
    }
}
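Hibernate aside, the flush-every-N bookkeeping above can be sketched on its own; the class name and the flush counter are illustrative:

```java
public class BatchInsertSketch {
    // Simulates the flush-every-N pattern from the Hibernate loop:
    // returns how many flushes a batch of the given size triggers when
    // flushing on every index divisible by the interval (including index 0).
    public static int countFlushes(int size, int interval) {
        int flushes = 0;
        for (int i = 0; i < size; i++) {
            if (i % interval == 0) {
                flushes++; // stands in for session.flush(); session.clear();
            }
        }
        return flushes;
    }

    public static void main(String[] args) {
        System.out.println(countFlushes(5000, 2000)); // prints 3 (indices 0, 2000, 4000)
    }
}
```

One quirk worth seeing: because `i % 2000 == 0` also holds at i = 0, the very first entity triggers a flush; starting the check at `i > 0` would avoid that extra round-trip.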
1: Get the full path before the parameters:
function common() {
    var location = (window.location + '').split('/');
    var basePath = location[0] + '//' + location[2] + '/' + location[3] + '/';
    return basePath;
}
For a URL such as http://www.nb.tt/a/b/c.html?----- this returns http://www.nb.tt/a/
2: Quickly export a file that already exists in the project:
$("#export").click(function() {
    window.open("../template/urls.xlsx");
})
Elasticsearch bool query clauses map to boolean operators:
must → AND
must_not → NOT
should → OR
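For illustration, a bool query combining the three clauses might look like this; the field names and values are hypothetical:

```json
{
  "query": {
    "bool": {
      "must":     [ { "term":  { "status": "active" } } ],
      "must_not": [ { "term":  { "deleted": true } } ],
      "should":   [ { "match": { "title": "spark" } } ]
    }
  }
}
```

Documents must satisfy every `must` clause, must fail every `must_not` clause, and matching `should` clauses only raise the relevance score when `must` is present.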
https://stackoverflow.com/questions/16063105/org-tmatesoft-sqljet-core-sqljetexception-busy-error-code-is-busy
Exception in thread "main" org.tmatesoft.svn.core.SVNException: svn: E200030: SQLite error
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:85)
at org.tmatesoft.svn.core.internal.wc.SVNErrorManager.error(SVNErrorManager.java:69)
at org.tmatesoft.svn.core.internal.wc17.db.SVNWCDbRoot.<init>(SVNWCDbRoot.java:83)
at org.tmatesoft.svn.core.internal.wc17.db.SVNWCDb.parseDir(SVNWCDb.java:1527)
at org.tmatesoft.svn.core.internal.wc17.db.SVNWCDb.parseDir(SVNWCDb.java:1390)
at org.tmatesoft.svn.core.internal.wc17.db.SVNWCDb.getFormatTemp(SVNWCDb.java:1223)
at org.tmatesoft.svn.core.internal.wc17.SVNWCContext.checkWC(SVNWCContext.java:4247)
at org.tmatesoft.svn.core.internal.wc17.SVNWCContext.checkWC(SVNWCContext.java:4241)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgAbstractUpdate.checkout(SvnNgAbstractUpdate.java:735)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:14)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgCheckout.run(SvnNgCheckout.java:9)
at org.tmatesoft.svn.core.internal.wc2.ng.SvnNgOperationRunner.run(SvnNgOperationRunner.java:20)
at org.tmatesoft.svn.core.internal.wc2.SvnOperationRunner.run(SvnOperationRunner.java:20)
at org.tmatesoft.svn.core.wc2.SvnOperationFactory.run(SvnOperationFactory.java:1238)
at org.tmatesoft.svn.core.wc2.SvnOperation.run(SvnOperation.java:294)
at org.tmatesoft.svn.core.wc.SVNUpdateClient.doCheckout(SVNUpdateClient.java:777)
at SvnTest.test(SvnTest.java:113)
at SvnTest.main(SvnTest.java:122)
Caused by: org.tmatesoft.sqljet.core.SqlJetException: BUSY: error code is BUSY
at org.tmatesoft.svn.core.internal.wc17.db.SVNWCDbRoot.<init>(SVNWCDbRoot.java:82)
... 15 more
Fix: delete the previously checked-out working copy and check out again to a fresh path.
Each independent scheduled job must be given a distinct name; otherwise only the last registered schedule runs and the others are silently ignored.
http://blog.csdn.net/u010039979/article/details/53378079
Run the following in MySQL:
drop database hive;
create database hive;
alter database hive character set latin1;
then restart Hive.
Installing MariaDB
MariaDB is a fork of MySQL maintained mainly by the open-source community under the GPL. One reason for the fork: after Oracle acquired MySQL there was a potential risk of MySQL being closed-sourced, so the community forked to avoid it. MariaDB aims to be fully compatible with MySQL, including the API and the command line, so it can serve as a drop-in replacement.
Install mariadb (about 59 MB):
[root@yl-web yl]# yum install mariadb-server mariadb
Service commands:
systemctl start mariadb # start MariaDB
systemctl stop mariadb # stop MariaDB
systemctl restart mariadb # restart MariaDB
systemctl enable mariadb # start MariaDB at boot
So first start the database:
[root@yl-web yl]# systemctl start mariadb
Then mysql works as usual:
[root@yl-web yl]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.41-MariaDB MariaDB Server
Copyright (c) 2000, 2014, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)
MariaDB [(none)]>
After installing MariaDB, the prompt shown is MariaDB [(none)]>.
Uninstall MySQL:
[root@localhost usr]# yum remove mysql mysql-server mysql-libs compat-mysql51
[root@localhost usr]# rm -rf /var/lib/mysql
[root@localhost usr]# rm /etc/my.cnf
Option 2 (MySQL proper): download and install mysql-server from the official site:
# wget http://dev.mysql.com/get/mysql-community-release-el7-5.noarch.rpm
# rpm -ivh mysql-community-release-el7-5.noarch.rpm
# yum install mysql-community-server
There is no password after a default install:
use mysql
set password for 'root'@'localhost' = password('password'); — set the mysql login password
1: Stop the firewall: systemctl stop iptables.service
2: Disable it at boot: systemctl disable firewalld.service
3: Check firewall state: firewall-cmd --state
// allow cross-origin (CORS) requests
response.setHeader("Access-Control-Allow-Origin", "*");
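Setting the header per handler gets repetitive; a hedged sketch of collecting the CORS headers in one place (the class, the method, and the extra headers are illustrative — in a servlet filter you would copy each entry onto the response with response.setHeader):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CorsHeaders {
    // Builds the CORS response headers once; the allowed methods and
    // request headers below are examples, not a fixed requirement.
    public static Map<String, String> forOrigin(String origin) {
        Map<String, String> h = new LinkedHashMap<>();
        h.put("Access-Control-Allow-Origin", origin);
        h.put("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
        h.put("Access-Control-Allow-Headers", "Content-Type");
        return h;
    }

    public static void main(String[] args) {
        System.out.println(forOrigin("*"));
    }
}
```

Using "*" disables credentialed requests; when cookies are involved, echo the specific origin instead and add Access-Control-Allow-Credentials.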
<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<style type="text/css">
body, html, #allmap {width: 100%;height: 100%;overflow: hidden;margin:0;}
#l-map{height:100%;width:78%;float:left;border-right:2px solid #bcbcbc;}
#r-result{height:100%;width:20%;float:left;}
</style>
<script type="text/javascript" src="http://api.map.baidu.com/api?v=1.5&ak="></script>
<script type="text/javascript" src="http://developer.baidu.com/map/jsdemo/demo/convertor.js"></script>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js" type="text/javascript"></script>
<title>GPS to Baidu coordinates</title>
<script>
var point = new BMap.Point(116.331398, 39.897445);
var xx = "";
var yy = "";
// check whether the mobile browser supports geolocation
if (navigator.geolocation) {
    var geolocation = new BMap.Geolocation(); // create a geolocation instance
    geolocation.getCurrentPosition(showLocation, {enableHighAccuracy: true}); // enableHighAccuracy asks the browser for its best result
} else {
    map.addControl(new BMap.GeolocationControl()); // add a geolocation control that supports positioning
}
// handle the located position
function showLocation(r) {
    if (this.getStatus() == BMAP_STATUS_SUCCESS) { // located successfully
        // create the center point and move the map center there
        alert("Baidu: " + r.longitude + "," + r.latitude);
    } else {
        alert('failed' + this.getStatus()); // geolocation failed
    }
}
</script>
</head>
<body>
<div id="allmap"></div>
Baidu JS API
</body>
</html>
Ehcache series 2: the Spring cache annotations @Cacheable, @CachePut and @CacheEvict
References:
http://www.ibm.com/developerworks/cn/opensource/os-cn-spring-cache/
http://swiftlet.net/archives/774
There are three cache annotations:
@Cacheable
@CacheEvict
@CachePut
@Cacheable(value="accountCache") means: when this method is called, look it up in a cache named accountCache; on a miss, execute the actual method (i.e. query the database) and store the result in the cache, otherwise return the cached object. The cache key here is the userName parameter and the value is the Account object. "accountCache" is a cache name defined in the spring*.xml configuration.
Example (Java):
@Cacheable(value="accountCache") // uses a cache named accountCache
public Account getAccountByName(String userName) {
    // the method body ignores caching entirely and just implements the business logic
    System.out.println("real query account." + userName);
    return getFromDB(userName);
}
@CacheEvict marks methods that clear the cache: when such a method is called, the cache entry is evicted. Note @CacheEvict(value="accountCache", key="#account.getName()"): key specifies the cache key. Because entries were stored under the account's name field, the key is taken from the name of the account parameter; the leading # marks a SpEL expression, which can navigate the method's parameter objects (see the Spring reference documentation for the syntax).
Example (Java):
@CacheEvict(value="accountCache", key="#account.getName()") // evict one accountCache entry
public void updateAccount(Account account) {
    updateDB(account);
}
@CacheEvict(value="accountCache", allEntries=true) // clear the whole accountCache
public void reload() {
    reloadAll();
}
@Cacheable(value="accountCache", condition="#userName.length() <= 4") // cache named accountCache
public Account getAccountByName(String userName) {
    // the method body ignores caching entirely and just implements the business logic
    return getFromDB(userName);
}
@CachePut guarantees that the method is executed and that its return value is also written to the cache, keeping the cache and the database in sync.
Example (Java):
@CachePut(value="accountCache", key="#account.getName()") // update the accountCache entry
public Account updateAccount(Account account) {
    return updateDB(account);
}
@Cacheable, @CachePut and @CacheEvict summarized
As the examples above show, spring cache revolves around these annotations; their purpose and parameters are summarized below.
Table 1.
@Cacheable — purpose and parameters
Purpose: applied to methods; caches the result keyed by the method's parameters.
Parameters:
value — the cache name(s), defined in the spring configuration; at least one is required. Example: @Cacheable(value="mycache") or @Cacheable(value={"cache1","cache2"})
key — the cache key; optional. If given, it must be a SpEL expression; if omitted, all method parameters are combined into the key by default. Example: @Cacheable(value="testcache", key="#userName")
condition — the caching condition; optional, a SpEL expression returning true or false; the result is cached only when it is true. Example: @Cacheable(value="testcache", condition="#userName.length()>2")
Table 2. @CachePut — purpose and parameters
Purpose: applied to methods; caches the result keyed by the method's parameters. Unlike @Cacheable, it always invokes the real method.
Parameters:
value — the cache name(s), defined in the spring configuration; at least one is required. Example: @Cacheable(value="mycache") or @Cacheable(value={"cache1","cache2"})
key — the cache key; optional. If given, it must be a SpEL expression; if omitted, all method parameters are combined into the key by default. Example: @Cacheable(value="testcache", key="#userName")
condition — the caching condition; optional, a SpEL expression returning true or false; the result is cached only when it is true. Example: @Cacheable(value="testcache", condition="#userName.length()>2")
Table 3. @CacheEvict — purpose and parameters
Purpose: applied to methods; clears cache entries when a given condition holds.
Parameters:
value — the cache name(s), defined in the spring configuration; at least one is required. Example: @CacheEvict(value="mycache") or @CacheEvict(value={"cache1","cache2"})
key — the cache key; optional. If given, it must be a SpEL expression; if omitted, all method parameters are combined into the key by default. Example: @CacheEvict(value="testcache", key="#userName")
condition — the eviction condition; optional, a SpEL expression returning true or false; entries are evicted only when it is true. Example: @CacheEvict(value="testcache", condition="#userName.length()>2")
allEntries — whether to clear the entire cache; defaults to false. When true, all entries are evicted immediately after the method call. Example: @CacheEvict(value="testcache", allEntries=true)
beforeInvocation — whether to evict before the method executes; defaults to false. By default, if the method throws an exception, nothing is evicted. Example: @CacheEvict(value="testcache", beforeInvocation=true)
Reposted from: http://tom-seed.iteye.com/blog/2104430
1. Open conf/server.xml under the Tomcat installation directory and find
</Engine></Service>
Just before it, add one of the following (three variants for now).
Variant 1:
<Host name="www.haokan946.cn" debug="0" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
  <Context path="" docBase="www" debug="0" reloadable="true" crossContext="true"/>
  <Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="www_5sai_log." suffix=".txt" timestamp="true"/>
</Host>
This creates a www folder under Tomcat's webapps directory as the document root for www.haokan946.cn.
Variant 2:
<Host name="test.5sai.net.cn" debug="0" appBase="C://test/www" unpackWARs="true" autoDeploy="true">
  <Context path="" docBase="."/>
  <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="test_5sai_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
  <Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="test_5sai_log." suffix=".txt" timestamp="true"/>
</Host>
This creates a www folder under C:\test as the document root for test.5sai.net.cn.
Variant 3, for a Linux system using /var/www as the document root for test.5sai.net.cn:
<Host name="test.5sai.net.cn" debug="0" appBase="/var/www" unpackWARs="true" autoDeploy="true">
  <Context path="" docBase="."/>
  <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="test_5sai_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
  <Logger className="org.apache.catalina.logger.FileLogger" directory="logs" prefix="test_5sai_log." suffix=".txt" timestamp="true"/>
</Host>
And one more:
<Host name="www.haokan946.cn" debug="0" unpackWARs="true">
  <Valve className="org.apache.catalina.valves.AccessLogValve" directory="/var/log/tomcat" prefix="www_5sai_access_log." suffix=".txt" pattern="common"/>
  <Logger className="org.apache.catalina.logger.FileLogger" directory="/var/log/tomcat" prefix="www_5sai_log."
suffix=".txt" timestamp="true"/>
  <Context path="" docBase="/var/www" debug="0" reloadable="true"/>
</Host>
Here the document root for www.haokan946.cn is /var/www, and all access logs for the site go to /var/log/tomcat.
Test: on Windows, open the hosts file at C:/WINDOWS/system32/drivers/etc; on Linux, open the hosts file under /etc (or vi /etc/hosts) and add:
127.0.0.1 localhost
127.0.0.1 www.haokan946.cn
127.0.0.1 test.5sai.net.cn
Then open a browser and enter the URLs directly to test.
toolbox : {
    show : true,
    feature : {
        dataView : {
            optionToContent : function(option) {
                // row labels
                var axisData = option.xAxis[0].data;
                // column labels
                var header = option.legend[0].data;
                var seriesarr = option.series;
                var eldiv = '<div id="viewdata" style="width:100%;display:block;margin-left:85px;overflow:auto;">';
                var firsttd = '<td></td>';
                var table = '<table style="width:100%;text-align:left;overflow:scroll;"><tbody>' + '<tr>';
                if (header != undefined) {
                    for (var i = 0; i < header.length; i++) {
                        firsttd += '<td>' + header[i] + '</td>';
                    }
                } else {
                    table = '<table style="width:50%;text-align:left;overflow:scroll;"><tbody>' + '<tr>';
                    firsttd += '<td>' + "Score" + '</td>';
                }
                // header row assembled
                table += firsttd + '</tr>';
                // body rows, one per axis entry
                for (var i = 0, l = axisData.length; i < l; i++) {
                    table += '<tr>' + '<td>' + axisData[i] + '</td>';
                    for (var j = 0; j < seriesarr.length; j++) {
                        table += '<td>' + seriesarr[j].data[i] + '</td>';
                    }
                    table += '</tr>';
                }
                table += '</tbody></table>';
                return eldiv + table + '</div>';
            }
        },
        magicType : { show : true, type : [ 'line', 'bar' ] },
        restore : { show : true },
        saveAsImage : { show : true }
    }
},
1: Under the ECharts 2.0 macarons theme, switching a bar chart to a line chart defaults to a smooth curve, while the default theme uses a hard polyline. To get a hard polyline under the 2.0 macarons theme, set the property explicitly:
series: [{
    name: 'Max temperature',
    type: 'line',
    smooth: false, // false = hard polyline, true = smooth curve
    data: [11, 11, 15, 13, 12, 13, 10],
    markPoint: {
        data: [
            {type: 'max', name: 'maximum'},
            {type: 'min', name: 'minimum'}
        ]
    },
    markLine: {
        data: [
            {type: 'average', name: 'average'}
        ]
    }
}]
When creating the project, choose the freestyle project type.
yum install gcc glibc-devel make ncurses-devel openssl-devel xmlto
1. Install and configure Erlang
Download the source package from http://www.erlang.org/downloads; I chose otp_src_18.3.tar.gz.
Unpack it:
[root@iZ25e3bt9a6Z rabbitmq]# tar -xzvf otp_src_18.3.tar.gz
[root@iZ25e3bt9a6Z rabbitmq]# cd otp_src_18.3/
Configure the install path:
[root@iZ25e3bt9a6Z otp_src_18.3]# ./configure --prefix=/opt/erlang
Compile and install:
[root@iZ25e3bt9a6Z otp_src_18.3]# make && make install
# during install some files may lack execute permission; chmod +x file adds it
Then add the Erlang environment variables to /etc/profile:
ERLANG_HOME=/opt/erlang
PATH=$ERLANG_HOME/bin:$PATH
export ERLANG_HOME
export PATH
Run source /etc/profile to make the changes take effect.
While installing Erlang you may hit errors, usually caused by missing system packages; yum install whatever package is reported missing.
Run erl to verify the installation. If you see:
Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.3 (abort with ^G)
then Erlang is installed.
2. Download and install RabbitMQ
[root@iZ25e3bt9a6Z rabbitmq]# wget http://www.rabbitmq.com/releases/rabbitmq-server/v3.6.1/rabbitmq-server-generic-unix-3.6.1.tar.xz
Unpack:
[root@iZ25e3bt9a6Z rabbitmq]# xz -d rabbitmq-server-generic-unix-3.6.1.tar.xz
[root@iZ25e3bt9a6Z rabbitmq]# tar -xvf rabbitmq-server-generic-unix-3.6.1.tar -C /opt
Under /opt there is now a new folder, rabbitmq-server-generic-unix-3.6.1; rename it to rabbitmq for convenience.
Then add the rabbitmq environment variable to /etc/profile:
#set rabbitmq environment
export PATH=$PATH:/opt/rabbitmq/sbin
Run source /etc/profile to make it take effect.
3. Starting and stopping the RabbitMQ service
That completes the RabbitMQ installation; how do you start the service?
Start the service:
[root@iZ25e3bt9a6Z rabbitmq]# cd sbin/
[root@iZ25e3bt9a6Z sbin]# ./rabbitmq-server -detached    (starts the message broker)
Check the service status:
[root@iZ25e3bt9a6Z sbin]# ./rabbitmqctl status
Status of node rabbit@iZ25e3bt9a6Z ...
[{pid,11849},
 {running_applications,
  [{rabbitmq_management,"RabbitMQ Management Console","3.6.1"},
   {rabbitmq_management_agent,"RabbitMQ Management Agent","3.6.1"},
   {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.6.1"},
   {webmachine,"webmachine","1.10.3"},
   {amqp_client,"RabbitMQ AMQP Client","3.6.1"},
   {mochiweb,"MochiMedia Web Server","2.13.0"},
   {syntax_tools,"Syntax tools","1.7"},
   {ssl,"Erlang/OTP SSL application","7.3"},
   {public_key,"Public key infrastructure","1.1.1"},
   {asn1,"The Erlang ASN1 compiler version 4.0.2","4.0.2"},
   {crypto,"CRYPTO","3.6.3"},
   {compiler,"ERTS CXC 138 10","6.0.3"},
   {inets,"INETS CXC 138 49","6.2"},
   {rabbit,"RabbitMQ","3.6.1"},
   {mnesia,"MNESIA CXC 138 12","4.13.3"},
   {rabbit_common,[],"3.6.1"},
   {xmerl,"XML parser","1.3.10"},
   {os_mon,"CPO CXC 138 46","2.4"},
   {ranch,"Socket acceptor pool for TCP protocols.","1.2.1"},
   {sasl,"SASL CXC 138 11","2.7"},
   {stdlib,"ERTS CXC 138 10","2.8"},
   {kernel,"ERTS CXC 138 10","4.2"}]},
 {os,{unix,linux}},
 {erlang_version,
  "Erlang/OTP 18 [erts-7.3] [source] [64-bit] [smp:8:8] [async-threads:64] [hipe] [kernel-poll:true]\n"},
 {memory,
  [{total,64111264},
   {connection_readers,0},
   {connection_writers,0},
   {connection_channels,0},
   {connection_other,2808},
   {queue_procs,2808},
   {queue_slave_procs,0},
   {plugins,367288},
   {other_proc,19041296},
   {mnesia,61720},
   {mgmt_db,158696},
   {msg_index,47120},
   {other_ets,1372440},
   {binary,128216},
   {code,27368230},
   {atom,992409},
   {other_system,14568233}]},
 {alarms,[]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,6556241100},
 {disk_free_limit,50000000},
 {disk_free,37431123968},
 {file_descriptors,
  [{total_limit,65435},
   {total_used,2},
   {sockets_limit,58889},
   {sockets_used,0}]},
 {processes,[{limit,1048576},{used,204}]},
 {run_queue,0},
 {uptime,412681},
 {kernel,{net_ticktime,60}}]

Stop the service:
[root@iZ25e3bt9a6Z sbin]# ./rabbitmqctl stop
Stopping and halting node rabbit@iZ25e3bt9a6Z ...

4. Enable the web management plugin
Create the config directory first, otherwise the plugin may fail:
mkdir /etc/rabbitmq
Then enable the plugin:
./rabbitmq-plugins enable rabbitmq_management
Open the relevant ports in the Linux firewall: 15672 (web management UI) and 5672 (AMQP).
Then browse to http://localhost:15672 — the default user is guest with password guest.

5. Remote access
By default the web UI cannot be used remotely; add a new user and grant it permissions:
Add a user:        rabbitmqctl add_user jxd jxd
Grant permissions: rabbitmqctl set_permissions -p "/" jxd ".*" ".*" ".*"
Make it an admin:  rabbitmqctl set_user_tags jxd administrator
After that you can log in remotely and manage users and permissions directly from the UI. (Make sure the firewall allows the ports.)

6. Common rabbitmqctl commands
add_user <UserName> <Password>
delete_user <UserName>
change_password <UserName> <NewPassword>
list_users
add_vhost <VHostPath>
delete_vhost <VHostPath>
list_vhosts
set_permissions [-p <VHostPath>] <UserName> <Regexp> <Regexp> <Regexp>
clear_permissions [-p <VHostPath>] <UserName>
list_permissions [-p <VHostPath>]
list_user_permissions <UserName>
list_queues [-p <VHostPath>] [<QueueInfoItem> ...]
list_exchanges [-p <VHostPath>] [<ExchangeInfoItem> ...]
list_bindings [-p <VHostPath>]
list_connections [<ConnectionInfoItem> ...]
create database easyrec;
# create the easyrec user and grant it the privileges it needs
grant index, create, select, insert, update, drop, delete, alter, lock tables on easyrec.* to 'jinxudong'@'localhost' identified by 'jinxudong';

JavaScript API docs: http://easyrec.sourceforge.net/wiki/index.php?title=JavaScript_API_v0.98
<!-- mail message-queue listener -->
<bean id="maillistener" class="cn.xdf.wlyy.listener.Maillistener" name="maillistener">
    <property name="mailManager" ref="mailManager"></property>
    <property name="msgLogService" ref="msgLogService"></property>
</bean>
<!-- SMS message-queue listener -->
<bean id="smslistener" class="cn.xdf.wlyy.listener.SmsListener" name="smslistener">
    <property name="deptService" ref="deptService"></property>
    <property name="msgLogService" ref="msgLogService"></property>
</bean>
<!-- mail queue -->
<rabbit:queue id="mail_queue" durable="true" auto-delete="false" exclusive="false" name="mail_queue" />
<!-- SMS queue -->
<rabbit:queue id="sms_queue" durable="true" auto-delete="false" exclusive="false" name="sms_queue" />
<!-- MQ host, port, user and password -->
<rabbit:connection-factory id="connectionFactory" host="${mq.host}" port="${mq.port}" username="${mq.username}" password="${mq.password}" />
<rabbit:admin connection-factory="connectionFactory" id="myadmin" />
<!-- rabbit template for sending/receiving messages -->
<rabbit:template id="amqpTemplate" connection-factory="connectionFactory"></rabbit:template>
<!-- listen on the mail queue -->
<rabbit:listener-container connection-factory="connectionFactory" acknowledge="auto">
    <rabbit:listener queues="mail_queue" ref="maillistener" />
</rabbit:listener-container>
<!-- listen on the SMS queue -->
<rabbit:listener-container connection-factory="connectionFactory" acknowledge="auto">
    <rabbit:listener queues="sms_queue" ref="smslistener" />
</rabbit:listener-container>

// a listener processes messages in real time as they arrive
public class Maillistener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // business logic
    }
}
// singleton for the client settings
private static Settings getSettingInstance() {
    if (settings == null) {
        synchronized (Settings.class) {
            if (settings == null) {
                settings = ImmutableSettings.settingsBuilder()
                        // client.transport.sniff=true: sniff the whole cluster state, add the
                        // other nodes' addresses automatically, and discover newly joined nodes
                        .put("client.transport.sniff", true)
                        .put("client", true)               // connect as a client only
                        .put("data", false)
                        .put("cluster.name", clustername)  // cluster name
                        .build();
            }
        }
    }
    return settings;
}

// singleton for the client itself; note that double-checked locking is only
// safe when the field is declared volatile
private static volatile TransportClient client;

private static TransportClient getIstance() {
    if (client == null) {
        // synchronized block: under concurrent access, guarantees the object is
        // created only once and never re-created
        synchronized (TransportClient.class) {
            if (client == null) {
                client = new TransportClient(getSettingInstance())
                        // TCP transport addresses
                        .addTransportAddress(new InetSocketTransportAddress(hostname, Integer.parseInt(port1)))
                        .addTransportAddress(new InetSocketTransportAddress(hostname, Integer.parseInt(port2)));
            }
        }
    }
    return client;
}

/**
 * Index a list of documents into Elasticsearch.
 *
 * @param jsonlist documents to index
 */
public static void createIndex(List<JSONObject> jsonlist) {
    searchRequestBuilder = getIstance().prepareSearch(index);
    try {
        for (int i = 0; i < jsonlist.size(); i++) {
            IndexResponse indexResponse = client
                    .prepareIndex(index, type, jsonlist.get(i).getString("id"))
                    .setSource(jsonlist.get(i).toString())
                    .execute().actionGet();
            if (indexResponse.isCreated()) {
                System.out.println("Indexed successfully!");
            } else {
                System.out.println("Indexing failed!");
            }
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
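Double-checked locking as used above is easy to get wrong (it is only safe on volatile fields). A lock-free alternative is the initialization-on-demand holder idiom: the JVM guarantees a nested class is initialised exactly once, on first access. A JDK-only sketch with a stand-in ConfigHolder class (the name is illustrative, not from this project):

```java
// Lazy, thread-safe singleton via the holder idiom: no volatile, no locking.
class ConfigHolder {
    private ConfigHolder() { }

    // Holder is loaded (and INSTANCE created) only when getInstance() is first called;
    // class initialisation is serialised by the JVM, so this is thread-safe.
    private static class Holder {
        static final ConfigHolder INSTANCE = new ConfigHolder();
    }

    public static ConfigHolder getInstance() {
        return Holder.INSTANCE;
    }
}
```

Every call returns the same instance, so the expensive object (here it would be the TransportClient) is built at most once.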
$("p:eq(0)") selects the first <p> element and $("p:eq(1)") the second — the :eq() index is zero-based.
http://blog.csdn.net/javachannel/article/details/752437/
package cn.xdf.wlyy.solr.utils;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.ResourceBundle;
import java.util.concurrent.ExecutionException;

import org.apache.commons.lang.StringUtils;
import org.apache.log4j.Logger;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexResponse;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.SearchHits;
import org.elasticsearch.search.highlight.HighlightField;

import com.alibaba.fastjson.JSONObject;

import cn.xdf.wlyy.bbyh.vo.SearchParVo;
import cn.xdf.wlyy.utils.PagedResult;

/**
 * ElasticSearch utility class.
 * <p>
 * Copyright: Copyright (c) 2017-07-10 09:05:30
 * <p>
 * Company: Beijing New Oriental School
 * <p>
 * Author: jinxudong@xdf.cn
 * <p>
 * Version: 1.0
 */
public class EsUtil {

    /** Logger */
    private static Logger logger = Logger.getLogger(EsUtil.class);
    private static TransportClient client;
    private static ResourceBundle resource = ResourceBundle.getBundle("es");
    /** Index name */
    private static String index = resource.getString("es.db");
    /** Type (table) name */
    private static String type = resource.getString("es.table");
    /** Number of shards in the cluster */
    private static String shards_str = resource.getString("es.shards");
    private static Integer shards = Integer.parseInt(shards_str);
    private static SearchRequestBuilder searchRequestBuilder;
    // cluster name
    private static String clustername = resource.getString("es.cluster.name");
    // cluster IP / domain name
    private static String hostname = resource.getString("es.hostname");
    // port of the first node
    private static String port1 = resource.getString("es.port.one");
    // port of the second node
    private static String port2 = resource.getString("es.port.two");
    private static Settings settings;

    // Initialise once: creating a connection per call would leak memory.
    // (A singleton works just as well.)
    static {
        settings = ImmutableSettings.settingsBuilder()
                // client.transport.sniff=true: sniff the whole cluster state, add the
                // other nodes' addresses automatically, and discover newly joined nodes
                .put("client.transport.sniff", true)
                .put("client", true)               // connect as a client only
                .put("data", false)
                .put("cluster.name", clustername)  // cluster name
                .build();
        client = new TransportClient(settings)
                // TCP transport addresses
                .addTransportAddress(new InetSocketTransportAddress(hostname, Integer.parseInt(port1)))
                .addTransportAddress(new InetSocketTransportAddress(hostname, Integer.parseInt(port2)));
    }

    /**
     * Index a list of documents into Elasticsearch.
     *
     * @param jsonlist documents to index
     */
    public static void createIndex(List<JSONObject> jsonlist) {
        try {
            for (int i = 0; i < jsonlist.size(); i++) {
                IndexResponse indexResponse = client
                        .prepareIndex(index, type, jsonlist.get(i).getString("id"))
                        .setSource(jsonlist.get(i).toString())
                        .execute().actionGet();
                if (indexResponse.isCreated()) {
                    logger.info("Written to the index...");
                } else {
                    logger.info("Write to the index failed...");
                }
            }
        } catch (Exception e) {
            logger.error(e);
        }
    }

    /**
     * Delete documents by id.
     *
     * @param uids document ids
     */
    public static void deleteIndex(List<String> uids) {
        for (int i = 0; i < uids.size(); i++) {
            DeleteResponse dResponse = client.prepareDelete(index, type, uids.get(i)).execute().actionGet();
            if (dResponse.isContextEmpty()) {
                logger.info(uids.get(i) + " deleted...");
            } else {
                logger.info(uids.get(i) + " delete failed...");
            }
        }
    }

    /**
     * Delete an entire index.
     *
     * @param indexName index name
     */
    public static void deleteIndexLib(String indexName) {
        DeleteIndexResponse dResponse = client.admin().indices().prepareDelete(indexName).execute().actionGet();
        if (dResponse.isContextEmpty()) {
            logger.info(indexName + " deleted.");
        } else {
            logger.info(indexName + " delete failed");
        }
    }

    /**
     * @param uid  id of the document to update
     * @param json new document body
     */
    public static void updateIndex(String uid, JSONObject json) {
        UpdateRequest updateRequest = new UpdateRequest();
        updateRequest.index(index);
        updateRequest.type(type);
        updateRequest.id(uid);
        updateRequest.doc(json);
        try {
            UpdateResponse updateResponse = client.update(updateRequest).get();
            if (!updateResponse.isCreated()) {
                logger.info(uid + " updated");
            } else {
                logger.info(uid + " update failed");
            }
        } catch (InterruptedException e) {
            logger.error(e);
        } catch (ExecutionException e) {
            logger.error(e);
        }
    }

    /**
     * Multi-field search.
     *
     * @param pageSize  page size
     * @param currentNo current page number
     * @param vo        search parameters
     * @param columns   index fields to match against
     * @return map with "dispage" (paging info) and "jsonlist" (the hits)
     */
    public static Map<String, Object> query(Integer pageSize, Integer currentNo, SearchParVo vo, String... columns) {
        searchRequestBuilder = client.prepareSearch(index);
        HashMap<String, Object> map = new HashMap<String, Object>();
        // result set
        List<JSONObject> resultlist = new ArrayList<JSONObject>();
        QueryBuilder qb = null;
        QueryBuilder qb_state = null;
        QueryBuilder qb_dept = null;
        QueryBuilder qb_item = null;
        QueryBuilder qb_subject = null;
        QueryBuilder qb_regtype = null;
        QueryBuilder qb_disway = null;
        BoolQueryBuilder querybuilder = QueryBuilders.boolQuery();
        if (StringUtils.isNotBlank(vo.getTitle())) {
            qb = QueryBuilders.multiMatchQuery(vo.getTitle(), columns);
            querybuilder.must(qb);
            // mandatory condition: only query content that should be shown
            qb_state = QueryBuilders.matchPhraseQuery("state", "1");
            querybuilder.must(qb_state);
            if (StringUtils.isNotBlank(vo.getDid())) {
                qb_dept = QueryBuilders.matchPhraseQuery("d_id", vo.getDid());
                querybuilder.must(qb_dept);
            }
            if (StringUtils.isNotBlank(vo.getIid())) {
                qb_item = QueryBuilders.matchPhraseQuery("i_id", vo.getIid());
                querybuilder.must(qb_item);
            }
            if (StringUtils.isNotBlank(vo.getSid())) {
                qb_subject = QueryBuilders.matchPhraseQuery("s_id", vo.getSid());
                querybuilder.must(qb_subject);
            }
            if (StringUtils.isNotBlank(vo.getRegtype())) {
                qb_regtype = QueryBuilders.matchPhraseQuery("registration_type", vo.getRegtype());
                querybuilder.must(qb_regtype);
            }
            if (StringUtils.isNotBlank(vo.getDisway())) {
                qb_disway = QueryBuilders.matchPhraseQuery("discount_way", vo.getDisway());
                querybuilder.must(qb_disway);
            }
        } else {
            qb = QueryBuilders.matchAllQuery();
            querybuilder.must(qb);
        }
        searchRequestBuilder.setQuery(querybuilder);
        SearchResponse response = searchRequestBuilder.execute().actionGet();
        SearchHits hits = response.getHits();
        // total hits
        long total = hits.totalHits();
        // total pages
        int totalPages = totalPage(1, pageSize, (int) total);
        // offset of the first record on this page
        int start = (currentNo - 1) * pageSize;
        // highlighting
        searchRequestBuilder.addHighlightedField("title");
        searchRequestBuilder.setHighlighterPreTags("<span style=\"color:red\">");
        searchRequestBuilder.setHighlighterPostTags("</span>");
        response = searchRequestBuilder.setFrom(start).setSize(pageSize).execute().actionGet();
        SearchHit[] searchHits = response.getHits().hits();
        // paging info
        PagedResult disPage = new PagedResult();
        disPage.setTotal(total);
        disPage.setPages(totalPages);
        disPage.setPageNo(currentNo);
        disPage.setPageSize(pageSize);
        for (SearchHit searchHit : searchHits) {
            Map<String, Object> dd = searchHit.getSource();
            JSONObject json = (JSONObject) JSONObject.toJSON(dd);
            // pull the configured highlight field out of the hit
            Map<String, HighlightField> result = searchHit.highlightFields();
            HighlightField titleField = result.get("title");
            if (titleField != null) {
                // rebuild the title from the fragments wrapped in the highlight tags above
                Text[] titleTexts = titleField.fragments();
                String title = "";
                for (Text text : titleTexts) {
                    title += text;
                }
                json.put("title", title);
            }
            resultlist.add(json);
        }
        map.put("dispage", disPage);
        map.put("jsonlist", resultlist);
        return map;
    }

    /**
     * @param currentNo current page
     * @param pageSize  records per page
     * @param totalNum  total record count
     * @return total number of pages
     */
    public static int totalPage(Integer currentNo, Integer pageSize, int totalNum) {
        int totalPages = 0;
        if (totalNum % pageSize == 0) {
            totalPages = totalNum / pageSize;
        } else {
            totalPages = totalNum / pageSize + 1;
        }
        return totalPages;
    }
}
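The totalPage helper above is a ceiling division written as a remainder check; the same result can be computed branch-free. A JDK-only sketch (the Paging class name is made up for illustration):

```java
// Ceiling division without floating point or branching:
// adding (pageSize - 1) before dividing rounds any partial page up.
class Paging {
    public static int totalPages(int totalNum, int pageSize) {
        return (totalNum + pageSize - 1) / pageSize;
    }
}
```

For example, 11 records at 5 per page gives (11 + 4) / 5 = 3 pages, matching the if/else version.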
No1. Install MySQL on Ubuntu: sudo apt-get install mysql-server mysql-client (don't forget the root password you set).
No2. Verify the installation: sudo service mysql restart
No3. Allow remote IP access: in /etc/mysql/my.cnf, find bind-address = 127.0.0.1 and comment it out.
No4. Set the character set and collation: in /etc/mysql/my.cnf, add character-set-server=utf8 under [mysqld].
No5. Grant root remote access:
mysql -u root -p
-> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
-> FLUSH PRIVILEGES;
tail -1000 catalina.out | grep Exception
DateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
try {
    Date d1 = df.parse("2004-03-26 13:31:40");
    Date d2 = df.parse("2004-01-02 11:30:24");
    long diff = d1.getTime() - d2.getTime(); // the difference is in milliseconds
    long days = diff / (1000 * 60 * 60 * 24);
    long hours = (diff - days * (1000 * 60 * 60 * 24)) / (1000 * 60 * 60);
    long minutes = (diff - days * (1000 * 60 * 60 * 24) - hours * (1000 * 60 * 60)) / (1000 * 60);
    System.out.println(days + " days " + hours + " hours " + minutes + " minutes");
} catch (Exception e) {
    // parse errors are swallowed here; log them in real code
}
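On Java 8+, java.time can do the same breakdown without manual millisecond arithmetic. A standard-library sketch (DateDiff is an illustrative name, not from the original snippet):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

class DateDiff {
    // Returns the difference between two "yyyy-MM-dd HH:mm:ss" timestamps,
    // broken down into whole days, leftover hours and leftover minutes.
    public static String diff(String later, String earlier) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
        Duration d = Duration.between(LocalDateTime.parse(earlier, f),
                                      LocalDateTime.parse(later, f));
        long days = d.toDays();
        long hours = d.toHours() - days * 24;          // hours beyond whole days
        long minutes = d.toMinutes() - d.toHours() * 60; // minutes beyond whole hours
        return days + " days " + hours + " hours " + minutes + " minutes";
    }
}
```

For the dates in the snippet above this yields 84 days 2 hours 1 minutes (2004 is a leap year, so February contributes 29 days).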
Instantiating a non-static inner class from main fails unless you go through an instance of the enclosing class:

public class TestGson {
    public static void main(String[] args) {
        Gson gson = new Gson();
        TestGson testgson = new TestGson();
        // an inner class instance must be created via an outer instance
        Student student = testgson.new Student();
        student.setId(2);
        student.setName("金旭东");
        String strstudent = gson.toJson(student);
        Object students = JSONObject.toJSON(student);
        System.out.println("gson=" + strstudent + " fastjson=" + students.toString());
    }

    class Student {
        private int id;
        private String name;
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }
}
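If the nested class does not need a reference to the enclosing instance, declaring it static sidesteps the problem entirely: no testgson.new is required, and JSON libraries generally serialise static nested classes more predictably. A JDK-only sketch (class names are illustrative):

```java
// With a static nested class, `new Student()` works directly from main.
class TestGsonStatic {
    static class Student {
        private int id;
        private String name;
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        Student student = new Student(); // no outer instance needed
        student.setId(2);
        student.setName("jinxudong");
        System.out.println(student.getId() + ":" + student.getName());
    }
}
```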
PS: this is only a quick integration walkthrough, meant as a starting point; the details are left for you to dig into.

1. Producer configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:rabbit="http://www.springframework.org/schema/rabbit"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd">

    <!-- connection settings -->
    <rabbit:connection-factory id="connectionFactory" host="localhost" username="guest" password="guest" port="5672" />
    <rabbit:admin connection-factory="connectionFactory"/>

    <!-- queue declaration -->
    <rabbit:queue id="queue_one" durable="true" auto-delete="false" exclusive="false" name="queue_one"/>

    <!-- bind the exchange to the queue with a routing key -->
    <rabbit:direct-exchange name="my-mq-exchange" durable="true" auto-delete="false" id="my-mq-exchange">
        <rabbit:bindings>
            <rabbit:binding queue="queue_one" key="queue_one_key"/>
        </rabbit:bindings>
    </rabbit:direct-exchange>

    <!-- Spring AMQP defaults to a Jackson-based converter that serialises outgoing
         objects to JSON before they enter the queue; since fastjson is faster than
         Jackson, a fastjson implementation is plugged in here instead -->
    <bean id="jsonMessageConverter" class="mq.convert.FastJsonMessageConverter"></bean>

    <!-- spring template declaration -->
    <rabbit:template exchange="my-mq-exchange" id="amqpTemplate" connection-factory="connectionFactory" message-converter="jsonMessageConverter"/>
</beans>

2. The fastjson MessageConverter implementation:
import java.io.UnsupportedEncodingException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.support.converter.AbstractMessageConverter;
import org.springframework.amqp.support.converter.MessageConversionException;

import fe.json.FastJson;

public class FastJsonMessageConverter extends AbstractMessageConverter {
    private static Log log = LogFactory.getLog(FastJsonMessageConverter.class);
    public static final String DEFAULT_CHARSET = "UTF-8";
    private volatile String defaultCharset = DEFAULT_CHARSET;

    public FastJsonMessageConverter() {
        super();
        //init();
    }

    public void setDefaultCharset(String defaultCharset) {
        this.defaultCharset = (defaultCharset != null) ? defaultCharset : DEFAULT_CHARSET;
    }

    public Object fromMessage(Message message) throws MessageConversionException {
        return null;
    }

    public <T> T fromMessage(Message message, T t) {
        String json = "";
        try {
            json = new String(message.getBody(), defaultCharset);
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
        return (T) FastJson.fromJson(json, t.getClass());
    }

    protected Message createMessage(Object objectToConvert, MessageProperties messageProperties) throws MessageConversionException {
        byte[] bytes = null;
        try {
            String jsonString = FastJson.toJson(objectToConvert);
            bytes = jsonString.getBytes(this.defaultCharset);
        } catch (UnsupportedEncodingException e) {
            throw new MessageConversionException("Failed to convert Message content", e);
        }
        messageProperties.setContentType(MessageProperties.CONTENT_TYPE_JSON);
        messageProperties.setContentEncoding(this.defaultCharset);
        if (bytes != null) {
            messageProperties.setContentLength(bytes.length);
        }
        return new Message(bytes, messageProperties);
    }
}

3. Producer-side usage:
import org.springframework.amqp.core.AmqpTemplate;
import org.springframework.beans.factory.annotation.Autowired;

public class MyMqGatway {

    @Autowired
    private AmqpTemplate amqpTemplate;

    public void sendDataToCrQueue(Object obj) {
        amqpTemplate.convertAndSend("queue_one_key", obj);
    }
}

4. Consumer configuration (largely the same as the producer's):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:rabbit="http://www.springframework.org/schema/rabbit"
    xsi:schemaLocation="
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/rabbit
        http://www.springframework.org/schema/rabbit/spring-rabbit-1.0.xsd">

    <!-- connection settings -->
    <rabbit:connection-factory id="connectionFactory" host="localhost" username="guest" password="guest" port="5672" />
    <rabbit:admin connection-factory="connectionFactory"/>

    <!-- queue declaration -->
    <rabbit:queue id="queue_one" durable="true" auto-delete="false" exclusive="false" name="queue_one"/>

    <!-- bind the exchange to the queue with a routing key -->
    <rabbit:direct-exchange name="my-mq-exchange" durable="true" auto-delete="false" id="my-mq-exchange">
        <rabbit:bindings>
            <rabbit:binding queue="queue_one" key="queue_one_key"/>
        </rabbit:bindings>
    </rabbit:direct-exchange>

    <!-- queue listener: observer pattern — when a message arrives, the listener
         registered on that queue is notified -->
    <rabbit:listener-container connection-factory="connectionFactory" acknowledge="auto" task-executor="taskExecutor">
        <rabbit:listener queues="queue_one" ref="queueOneLitener"/>
    </rabbit:listener-container>
</beans>

5. Consumer-side listener:
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;

public class QueueOneLitener implements MessageListener {
    @Override
    public void onMessage(Message message) {
        System.out.println(" data :" + message.getBody());
    }
}

6. Because the listener is notified per message as soon as one arrives, there is no way to fetch in batches directly. Instead, buffer the messages taken from MQ in a local in-memory queue on the consumer side and let a background thread flush them to the database in batches on a schedule.

Source: http://blog.csdn.net/l192168134/article/details/51210188
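The local-buffer idea in point 6 can be sketched with a java.util.concurrent.BlockingQueue: the listener enqueues and returns immediately, and a scheduled background task drains up to a batch's worth of messages for bulk insertion. Class and method names below are made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class BatchBuffer {
    private final BlockingQueue<String> buffer = new LinkedBlockingQueue<>();

    // called from onMessage(): enqueue and return immediately
    public void offer(String msg) {
        buffer.offer(msg);
    }

    // called by a scheduled background thread: drain up to batchSize messages
    // in one go; the returned list is what you hand to a bulk INSERT
    public List<String> drainBatch(int batchSize) {
        List<String> batch = new ArrayList<>();
        buffer.drainTo(batch, batchSize);
        return batch;
    }
}
```

In practice the background thread would be a ScheduledExecutorService calling drainBatch every few seconds and skipping the database round-trip when the batch comes back empty.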
https://geewu.gitbooks.io/rabbitmq-quick/content/RabbitMQ%E5%9F%BA%E7%A1%80%E6%93%8D%E4%BD%9C.html
var names = yunying_name.split(",");
for (var i = 0; i < names.length; i++) {
    names[i] = names[i].trim();
}
$("#yunying_name").val(names);
// names is now ["a","b","c"]
Adding documents:

PropertiesUtils pro = new PropertiesUtils();
String path = pro.load("solr.properties", "solr.Url");
SolrServer solrServer = new HttpSolrServer(path);
SolrInputDocument document = new SolrInputDocument();
document.addField("uid", data.getId());
document.addField("title", data.getYhtitle());
document.addField("startTime", data.getStartdate());
document.addField("endTime", data.getEnddate());
UpdateResponse response = solrServer.add(document);
// commit
solrServer.commit();

Deleting:

PropertiesUtils pro = new PropertiesUtils();
String path = pro.load("solr.properties", "solr.Url");
SolrServer solrServer = new HttpSolrServer(path);
if (ListUtils.isNotBlank(ids)) {
    UpdateResponse d = solrServer.deleteById(ids);
    UpdateResponse ds = solrServer.deleteByQuery("*"); // deletes every document
}
if (StringUtils.isNotBlank(id)) {
    UpdateResponse d = solrServer.deleteById(id);
}
solrServer.commit();

Searching:

SolrQuery query = new SolrQuery();
// query condition
query.set("q", "par");
// paging: query.setStart(...) / query.setRows(...)
QueryResponse rsp = solrServer.query(query);
// result set, returned as a SolrDocumentList
SolrDocumentList doc = rsp.getResults();
doc.getStart();    // offset of the first record
doc.getNumFound(); // total number of hits
Criteria (the criteria query API)

// 1. simple query
List<Customer> list = session.createCriteria(Customer.class).list();

// 2. conditional query
Criteria criteria = session.createCriteria(Customer.class);
criteria.add(Restrictions.eq("name", "芙蓉"));
List<Customer> list = criteria.list();

// 3. paged query
Criteria criteria = session.createCriteria(Customer.class);
criteria.setFirstResult(3);
criteria.setMaxResults(3);
List<Customer> list = criteria.list();