
脚踏实地,仰望天空. https://talenhao.github.io/
默认sudo的过期时间过短,经常要输入密码. htf@linux-rzt3:~> sudo visudo Defaults env_reset,timestamp_timeout=20 在后面添加timestamp_timeout参数,数值是分钟.
git log 默认使用less 显示结果,有时觉得不是很方便 可以使用直接打印到屏幕的方式,使用cat代替
Django的密码发信人是'webmaster@localhost', 参见官方文档settings.py DEFAULT_FROM_EMAIL? Default: 'webmaster@localhost' Default email address to use for various automated correspondence from the site manager(s). This doesn’t include error messages sent to ADMINS and MANAGERS; for that, see SERVER_EMAIL. 在使用Django.contrib.auth.views时会出现发信人与认证用户名不匹配,报553错误. 解决办法: 修改django默认发信人,在project的settings.py中添加
1.安装mongodb-server [alerta@SUSE ~]$ curl -O https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel62-3.4.6.tgz [alerta@SUSE ~]$ tar zxvf mongodb-linux-x86_64-rhel62-3.4.6.tgz [alerta@SUSE mongodb-linux-x86_64-rhel62-3.4.6]$ cd mongodb-linux-x86_64-rhel62-3.4.6 && mkdir data [alerta@SUSE ~]$ nohup ./mongodb-linux-x86_64-rhel62-3.4.6/bin/mongod --dbpath /usr/local/alerta/mongodb-linux-x86_64-rhel62-3.4.6/data/ & 安装python3.6版本 [alerta@SUSE ~]$ wget https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tar.xz [alerta@SUSE ~]$ tar xvf Python-3.6.1.tar.xz [alerta@SUSE Python-3.6.1]$ ./configure --prefix=/usr/local/alerta/python3.6.1 && make && make install 安装alerta-server [alerta@SUSE bin]$ ./pip3 install alerta-server Collecting alerta-server Downloading alerta-server-4.9.6.tar.gz (44kB) 100% |████████████████████████████████| 51kB 187kB/s Collecting Flask (from alerta-server) Downloading Flask-0.12.2-py2.py3-none-any.whl (83kB) 100% |████████████████████████████████| 92kB 130kB/s Collecting Flask-Cors>=3.0.2 (from alerta-server) Downloading Flask_Cors-3.0.3-py2.py3-none-any.whl Collecting pymongo>=3.0 (from alerta-server) Downloading pymongo-3.4.0.tar.gz (583kB) 100% |████████████████████████████████| 583kB 55kB/s Collecting argparse (from alerta-server) Downloading argparse-1.4.0-py2.py3-none-any.whl Collecting requests (from alerta-server) Downloading requests-2.18.1-py2.py3-none-any.whl (88kB) 100% |████████████████████████████████| 92kB 35kB/s Collecting python-dateutil (from alerta-server) Downloading python_dateutil-2.6.0-py2.py3-none-any.whl (194kB) 100% |████████████████████████████████| 194kB 19kB/s Collecting pytz (from alerta-server) Downloading pytz-2017.2-py2.py3-none-any.whl (484kB) 100% |████████████████████████████████| 491kB 12kB/s Collecting PyJWT (from alerta-server) Downloading PyJWT-1.5.2-py2.py3-none-any.whl Collecting bcrypt (from alerta-server) Downloading bcrypt-3.1.3-cp36-cp36m-manylinux1_x86_64.whl (54kB) 100% |████████████████████████████████| 61kB 9.9kB/s Collecting Werkzeug>=0.7 (from Flask->alerta-server) Downloading Werkzeug-0.12.2-py2.py3-none-any.whl (312kB) 100% |████████████████████████████████| 317kB 14kB/s Collecting click>=2.0 (from Flask->alerta-server) Downloading click-6.7-py2.py3-none-any.whl (71kB) 100% |████████████████████████████████| 71kB 15kB/s Collecting itsdangerous>=0.21 (from Flask->alerta-server) Downloading itsdangerous-0.24.tar.gz (46kB) 100% |████████████████████████████████| 51kB 25kB/s Collecting Jinja2>=2.4 (from Flask->alerta-server) Downloading Jinja2-2.9.6-py2.py3-none-any.whl (340kB) 100% |████████████████████████████████| 348kB 27kB/s Collecting Six (from Flask-Cors>=3.0.2->alerta-server) Downloading six-1.10.0-py2.py3-none-any.whl Collecting idna=2.5 (from requests->alerta-server) Downloading idna-2.5-py2.py3-none-any.whl (55kB) 100% |████████████████████████████████| 61kB 25kB/s Collecting urllib3=1.21.1 (from requests->alerta-server) Downloading urllib3-1.21.1-py2.py3-none-any.whl (131kB) 100% |████████████████████████████████| 133kB 13kB/s Collecting certifi>=2017.4.17 (from requests->alerta-server) Downloading certifi-2017.4.17-py2.py3-none-any.whl (375kB) 100% |████████████████████████████████| 378kB 16kB/s Collecting chardet=3.0.2 (from requests->alerta-server) Downloading chardet-3.0.4-py2.py3-none-any.whl (133kB) 100% |████████████████████████████████| 143kB 8.9kB/s Collecting cffi>=1.1 (from bcrypt->alerta-server) Downloading cffi-1.10.0-cp36-cp36m-manylinux1_x86_64.whl (406kB) 100% |████████████████████████████████| 409kB 12kB/s Collecting MarkupSafe>=0.23 (from Jinja2>=2.4->Flask->alerta-server) Downloading MarkupSafe-1.0.tar.gz Collecting pycparser (from cffi>=1.1->bcrypt->alerta-server) Downloading pycparser-2.18.tar.gz (245kB) 100% |████████████████████████████████| 256kB 26kB/s Installing collected packages: Werkzeug, click, itsdangerous, MarkupSafe, Jinja2, Flask, Six, Flask-Cors, pymongo, argparse, idna, urllib3, certifi, chardet, requests, python-dateutil, pytz, PyJWT, pycparser, cffi, bcrypt, alerta-server Running setup.py install for itsdangerous ... done Running setup.py install for MarkupSafe ... done Running setup.py install for pymongo ... done Running setup.py install for pycparser ... done Running setup.py install for alerta-server ... done Successfully installed Flask-0.12.2 Flask-Cors-3.0.3 Jinja2-2.9.6 MarkupSafe-1.0 PyJWT-1.5.2 Six-1.10.0 Werkzeug-0.12.2 alerta-server-4.9.6 argparse-1.4.0 bcrypt-3.1.3 certifi-2017.4.17 cffi-1.10.0 chardet-3.0.4 click-6.7 idna-2.5 itsdangerous-0.24 pycparser-2.18 pymongo-3.4.0 python-dateutil-2.6.0 pytz-2017.2 requests-2.18.1 urllib3-1.21.1 使用uwsgi部署alerta-server,内置的alertad在测试时使用,上线不建议使用。 [alerta@SUSE ~]$ cat wsgi.py from alerta.app import app [alerta@SUSE ~]$ cat uwsgi.ini [uwsgi] chdir = /usr/local/alerta mount = /api=wsgi.py callable = app manage-script-name = true master = true processes = 5 logger = syslog:alertad socket = /usr/local/alerta/uwsgi.sock chmod-socket = 664 uid = alerta gid = alerta vacuum = true die-on-term = true 启动 [alerta@SUSE ~]$ nohup uwsgi --ini uwsgi.ini & [1] 18626 [uWSGI] getting INI configuration from uwsgi.ini 安装nginx web服务器 wget http://nginx.org/download/nginx-1.12.0.tar.gz tar zxvf nginx-1.12.0.tar.gz ./configure --prefix=/usr/local/nginx-1.5.1 --with-http_ssl_module --with-http_spdy_module --with-http_stub_status_module --with-pcre make && make install 注释掉原来的server,添加 include vhosts/*.conf; [alerta@SUSE ~]$ cat ~/nginx1.12/conf/vhosts/alerta.conf server { listen 28880; server_name 192.168.1.228 ; location /api { try_files $uri @api; } location @api { include uwsgi_params; uwsgi_pass unix:/usr/local/alerta/uwsgi.sock; proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location / { root /usr/local/alerta/angular-alerta-webui/app/; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/nginx1.12/nginx/html; } } ./nginx -t ./nginx 启动服务 安装alerta-webui前端 [alerta@SUSE ~]$ git clone https://github.com/alerta/angular-alerta-webui.git [alerta@SUSE ~]$ vim /usr/local/alerta/angular-alerta-webui/app/config.js 'use strict'; angular.module('config', []) .constant('config', { 'endpoint' : "/api", 'provider' : "basic", // google, github, gitlab, keycloak or basic 'client_id' : "INSERT-CLIENT-ID-HERE", 'github_url' : null, // replace with your enterprise github server 'gitlab_url' : "https://gitlab.com", // replace with your gitlab server 'keycloak_url': "https://keycloak.example.org", // replace with your keycloak server 'keycloak_realm': "master", // replace with your keycloak realm 'colors' : {}, // use default colors 'severity' : {}, // use default severity codes 'audio' : {}, // no audio 'tracking_id' : "" // Google Analytics tracking ID eg. UA-NNNNNN-N }); 安装alerta客户端测试 发送2条测试,显示1条重复。 [alerta@SUSE ~]$ pip install alerta [alerta@SUSE ~]$ alerta --endpoint-url http://192.168.1.228:28880/api send --resource webserver01 --event down --environment Production --service Website01 --severity major --text "Web server 01 is down." --value ERROR 6e28d3a5-f764-452b-9916-f6d54c533402 (indeterminate -> major) [alerta@SUSE ~]$ alerta --endpoint-url http://192.168.1.228:28880/api send --resource webserver01 --event down --environment Production --service Website01 --severity major --text "Web server 01 is down." --value ERROR 6e28d3a5-f764-452b-9916-f6d54c533402 (1 duplicates)
free是linux系统上常用的查看内存的命令,新版本的free添加了 -/+ buffer/cache一行 它们显示的值的计算方式如下: htf@linux-rzt3:~> free -V free from procps-ng 3.3.9 htf@linux-rzt3:~> free total used free shared buffers cached Mem: 7897172 4258540 3638632 302260 3876 1662412 -/+ buffers/cache: 2592252 5304920 Swap: 2106364 284284 1822080 htf@linux-rzt3:~> python Python 2.7.13 (default, Mar 22 2017, 12:31:17) [GCC] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> 4258540 - 3876 - 1662412 2592252 >>> 3638632 + 3876 + 1662412 5304920 由上面的显示及计算可以看出。 可用内存显示为两种状态:包含buffer/cache的使用量,不包含buffer/cache的使用量 下面分别给出计算公式: 不包含buffer/cache的使用量:- buffer/cache 2592252 = 4258540 - 3876 - 1662412 剩余内存 = used - buffer - cache 包含buffer/cache的使用量: + buffer/cache 5304920 = 3638632 + 3876 + 1662412 剩余内存 = free + buffer + cache
zabbix自带了‘DejaVuSans.ttf’字体文件,但无法显示中文,如图: 可以在linux系统中安装中文字体 [root@localhost etc]# yum install google-noto-sans-cjk-fonts.noarch 本以为zabbix是读取系统字体列表,没想到只是读取zabbix站点目录下的字体文件名 我的字体位于/usr/local/zabbix3.2.6/web/zabbix/fonts 于是只好直接替换DejaVuSans.ttf [root@localhost fonts]# cp /usr/share/fonts/google-noto/NotoSansCJK-Regular.ttc DejaVuSans.ttf /usr/local/zabbix3.2.6/web/zabbix/include/defines.inc.php中的配置文件不需要修改了。
我试图在openSUSE 42.2系统的根分区创建一个swap文件,却无法正常挂载。 经过查询应该是btrfs系统这种类型的文件系统不支持swap文件。 另外还有一个btrfs-swapon的项目可以在btrfs上挂载swap文件,但文档里说不太适合在copy-on-write文件系统中创建swap,除非不得已,不建议使用。Keep in mind, that a copy-on-write file system is not the best choice to use a swap file。 talen@opensuse:/> sudo dd if=/dev/zero of=/swapfile bs=1M count=8192 8192+0 records in 8192+0 records out 8589934592 bytes (8.6 GB, 8.0 GiB) copied, 15.2733 s, 562 MB/s talen@opensuse:/> sudo mkswap /swapfile mkswap: /swapfile: insecure permissions 0644, 0600 suggested. Setting up swapspace version 1, size = 8 GiB (8589930496 bytes) no label, UUID=58762fa9-d15f-4790-ad12-bbafa2f93de0 talen@opensuse:/> sudo swapon /swapfile swapon: /swapfile: insecure permissions 0644, 0600 suggested. swapon: /swapfile: swapon failed: Invalid argument talen@opensuse:/> sudo chmod 0600 /swapfile talen@opensuse:/> sudo swapon /swapfileswapon: /swapfile: swapon failed: Invalid argument 参考: https://github.com/sebastian-philipp/btrfs-swapon https://superuser.com/questions/539287/swapon-failed-invalid-argument-on-a-linux-system-with-btrfs-filesystem
logstash配置文件如下: 点击(此处)折叠或打开 input{ kafka{ bootstrap_servers => "xxxx:9092,xxxx:9092,dw72.xxxx.:9092,...." group_id => "xxxx_service_server_logstash" topics => ["xxxx_service_server_error", "xxxx_service_server_runtime"] auto_offset_reset => latest codec => "json" consumer_threads => 10 auto_commit_interval_ms => 500 } } filter { grok { #patterns_dir => ["./patterns"] match => { "message" => "%{TIMESTAMP_ISO8601:logdatetime}" } } date { match => [ "logdatetime", "yyyy-MM-dd HH:mm:ss"] target => "@timestamp" # timezone => "Asia/Shanghai" timezone => "+00:00" locale => "en" } } output{ elasticsearch{ hosts => ["192.168.1.89:9200"] action => "index" index => "%{[type]}-%{+YYYY.MM.dd}" flush_size => 8000 } } 主要关注两个问题: 1.event的timestamp问题,中国区会显示比实际的时间早8个小时(一般不建议修改时间,国际标准,在kibana中做处理。),这里只有国内的服务器,显示为区域时间比较好看,所以我就改了。 2.es 的index根据filebeat中的document_type自动创建index。
Django项目打包 这是目前开发完成的project目录树。我们要打包其中的polls app。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite> tree . ├── db.sqlite3 ├── mysite │ ├── __init__.py │ ├── __pycache__ │ │ ├── __init__.cpython-36.pyc │ │ ├── settings.cpython-36.pyc │ │ ├── urls.cpython-36.pyc │ │ └── wsgi.cpython-36.pyc │ ├── settings.py │ ├── urls.py │ └── wsgi.py ├── polls │ ├── admin.py │ ├── apps.py │ ├── __init__.py │ ├── migrations │ │ ├── 0001_initial.py │ │ ├── 0002_auto_20170401_1758.py │ │ ├── __init__.py │ │ └── __pycache__ │ │ ├── 0001_initial.cpython-36.pyc │ │ ├── 0002_auto_20170401_1758.cpython-36.pyc │ │ └── __init__.cpython-36.pyc │ ├── models.py │ ├── __pycache__ │ │ ├── admin.cpython-36.pyc │ │ ├── apps.cpython-36.pyc │ │ ├── __init__.cpython-36.pyc │ │ ├── models.cpython-36.pyc │ │ ├── tests.cpython-36.pyc │ │ ├── urls.cpython-36.pyc │ │ └── views.cpython-36.pyc │ ├── static │ │ └── polls │ │ ├── images │ │ │ └── background.jpg │ │ └── style.css │ ├── templates │ │ └── polls │ │ ├── detail.html │ │ ├── index.html │ │ └── results.html │ ├── tests.py │ ├── urls.py │ └── views.py └── templates └── admin ├── base_site.html └── index.html 复制一份添加前辍头django,让人容易识别这是一个django项目。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite> cp polls django-polls -rfv 创建说明文件,方便其它人阅读。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite> vim django-polls/README.rst ===== Polls ===== Polls is a simple Django app to conduct Web-based polls. For each question, visitors can choose between a fixed number of answers. Detailed documentation is in the "docs" directory. Quick start ----------- 1. Add "polls" to your INSTALLED_APPS setting like this:: INSTALLED_APPS = [ ... 'polls', ] 2. Include the polls URLconf in your project urls.py like this:: url(r'^polls/', include('polls.urls')), 3. Run `python manage.py migrate` to create the polls models. 4. Start the development server and visit http://127.0.0.1:8000/admin/ to create a poll (you'll need the Admin app enabled). 5. Visit http://127.0.0.1:8000/polls/ to participate in the poll. 创建setup.py脚本,提供详细的关于build,使用等方法。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls> vim setup.py (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls> cat setup.py import os from setuptools import find_packages, setup with open(os.path.join(os.path.dirname(__file__), 'README.rst')) as readme: README = readme.read() # allow setup.py to be run from any path os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir))) setup( name='django-polls', version='0.1', packages=find_packages(), include_package_data=True, license='BSD License', # example license description='A simple Django app to conduct Web-based polls.', long_description=README, url='https://www.example.com/', author='Your Name', author_email='yourname@example.com', classifiers=[ 'Environment :: Web Environment', 'Framework :: Django', 'Framework :: Django :: X.Y', # replace "X.Y" as appropriate 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', # example license 'Operating System :: OS Independent', 'Programming Language :: Python', # Replace these appropriately if you are stuck on Python 2. 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Topic :: Internet :: WWW/HTTP', 'Topic :: Internet :: WWW/HTTP :: Dynamic Content', ], ) 创建附加文件说明。 v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls> vim MANIFEST.in (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls> cat MANIFEST.in include LICENSE include README.rst recursive-include polls/static * recursive-include polls/templates * recursive-include docs * 附加文件有有个docs目录,创建docs目录,如果有文档等放置在其中。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls> mkdir docs 开始打包。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls> python setup.py sdist running sdist running egg_info creating django_polls.egg-info writing django_polls.egg-info/PKG-INFO writing dependency_links to django_polls.egg-info/dependency_links.txt writing top-level names to django_polls.egg-info/top_level.txt writing manifest file 'django_polls.egg-info/SOURCES.txt' reading manifest file 'django_polls.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'LICENSE' warning: no files found matching '*' under directory 'polls/static' warning: no files found matching '*' under directory 'polls/templates' warning: no files found matching '*' under directory 'docs' writing manifest file 'django_polls.egg-info/SOURCES.txt' running check creating django-polls-0.1 creating django-polls-0.1/django_polls.egg-info creating django-polls-0.1/migrations copying files to django-polls-0.1... copying MANIFEST.in -> django-polls-0.1 copying README.rst -> django-polls-0.1 copying setup.py -> django-polls-0.1 copying django_polls.egg-info/PKG-INFO -> django-polls-0.1/django_polls.egg-info copying django_polls.egg-info/SOURCES.txt -> django-polls-0.1/django_polls.egg-info copying django_polls.egg-info/dependency_links.txt -> django-polls-0.1/django_polls.egg-info copying django_polls.egg-info/top_level.txt -> django-polls-0.1/django_polls.egg-info copying migrations/0001_initial.py -> django-polls-0.1/migrations copying migrations/0002_auto_20170401_1758.py -> django-polls-0.1/migrations copying migrations/__init__.py -> django-polls-0.1/migrations Writing django-polls-0.1/setup.cfg creating dist Creating tar archive removing 'django-polls-0.1' (and everything under it) 在dist目录下存放了打包好的文件。 (v_python3.6) thinkt@linux-pw37:~/PycharmProjects/mysite/django-polls/dist> ll 总用量 4 -rw-r--r-- 1 thinkt users 2349 4月 18 15:08 django-polls-0.1.tar.gz 使用pip安装本地打包。 pip install --user django-polls/dist/django-polls-0.1.tar.gz 使用pip的uninstall删除 pip uninstall django-polls 参考:https://docs.djangoproject.com/en/1.10/intro/reusable-apps/
首先系统中需要安装vlc或mplayer 使用cvlc,不需要界面,另外配置播放后立即退出。-q表示quiet。 直接赶写mplayer也可以。
Django开发环境搭建 thinkt@linux-pw37:~/.virtualenvs/v_python3.6/bin> ./pip install django thinkt@linux-pw37:~/.virtualenvs/v_python3.6/bin> ./python -m django version 1.10.6 thinkt@linux-pw37:~/.virtualenvs/v_python3.6/bin> ./django-admin startproject mysite thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py runserver 8001 Performing system checks... System check identified no issues (0 silenced). You have 13 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, contenttypes, sessions. Run 'python manage.py migrate' to apply them. April 01, 2017 - 07:18:28 Django version 1.10.6, using settings 'mysite.settings' Starting development server at http://127.0.0.1:8001/ Quit the server with CONTROL-C. [01/Apr/2017 07:18:40] "GET / HTTP/1.1" 200 1767 Not Found: /favicon.ico [01/Apr/2017 07:18:40] "GET /favicon.ico HTTP/1.1" 404 1936 Not Found: /favicon.ico [01/Apr/2017 07:18:40] "GET /favicon.ico HTTP/1.1" 404 1936 thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py startapp polls thinkt@linux-pw37:~/PycharmProjects/mysite/polls> ls admin.py apps.py __init__.py migrations models.py tests.py views.py thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py makemigrations polls Migrations for 'polls': polls/migrations/0001_initial.py: - Create model Choice - Create model Question - Add field question to choice thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py sqlmigrate polls 0001 BEGIN; -- -- Create model Choice -- CREATE TABLE "polls_choice" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "choice_text" varchar(200) NOT NULL, "vates" integer NOT NULL); -- -- Create model Question -- CREATE TABLE "polls_question" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "question_text" varchar(200) NOT NULL, "pub_date" datetime NOT NULL); -- -- Add field question to choice -- ALTER TABLE "polls_choice" RENAME TO "polls_choice__old"; CREATE TABLE "polls_choice" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "choice_text" varchar(200) NOT NULL, "vates" integer NOT NULL, "question_id" integer NOT NULL REFERENCES "polls_question" ("id")); INSERT INTO "polls_choice" ("id", "choice_text", "vates", "question_id") SELECT "id", "choice_text", "vates", NULL FROM "polls_choice__old"; DROP TABLE "polls_choice__old"; CREATE INDEX "polls_choice_7aa0f6ee" ON "polls_choice" ("question_id"); COMMIT; thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py check System check identified no issues (0 silenced). thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py migrate Operations to perform: Apply all migrations: admin, auth, contenttypes, polls, sessions Running migrations: Applying contenttypes.0001_initial... OK Applying auth.0001_initial... OK Applying admin.0001_initial... OK Applying admin.0002_logentry_remove_auto_add... OK Applying contenttypes.0002_remove_content_type_name... OK Applying auth.0002_alter_permission_name_max_length... OK Applying auth.0003_alter_user_email_max_length... OK Applying auth.0004_alter_user_username_opts... OK Applying auth.0005_alter_user_last_login_null... OK Applying auth.0006_require_contenttypes_0002... OK Applying auth.0007_alter_validators_add_error_messages... OK Applying auth.0008_alter_user_username_max_length... OK Applying polls.0001_initial... OK Applying sessions.0001_initial... OK thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py migrate Operations to perform: Apply all migrations: admin, auth, contenttypes, polls, sessions Running migrations: No migrations to apply. Your models have changes that are not yet reflected in a migration, and so won't be applied. Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them. thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py makemigrations Did you rename choice.vates to choice.votes (a IntegerField)? [y/N] y Migrations for 'polls': polls/migrations/0002_auto_20170401_1758.py: - Rename field vates on choice to votes thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py migrate Operations to perform: Apply all migrations: admin, auth, contenttypes, polls, sessions Running migrations: Applying polls.0002_auto_20170401_1758... OK thinkt@linux-pw37:~/PycharmProjects/mysite> ~/.virtualenvs/v_python3.6/bin/python manage.py createsuperuser Username (leave blank to use 'thinkt'): Email address: talenhao@gmail.com Password: Password (again): This password is too short. It must contain at least 8 characters. Password: Password (again): This password is too common. This password is entirely numeric. Password: Password (again): Superuser created successfully.
In [15]: a = [1,2,3,4,5,5,6,7,7] In [16]: a.index(5) Out[16]: 4 In [17]: a.index(6) Out[17]: 6 In [18]: b = collect_common.unique_list(a) In [19]: a.index(5) Out[19]: 4 In [20]: a.index(6) Out[20]: 5 可见list去重后index相应的也进行了变化。
# yum search --showduplicates salt ... salt-2015.5.10-2.el6.noarch : A parallel remote execution system salt-2015.5.10-2.el6.noarch : A parallel remote execution system salt-2016.11.3-1.el6.noarch : A parallel remote execution system ... # yum downgrade salt 降级指定版本后面写查询到的软件包全名。
pytho 2.6 salt state模块pip报AttributeError: 'Requirement' object has no attribute 'project_name' pip install --upgrade pip DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6 回退到0.7版本解决。 # pip --version pip 7.1.0 from /usr/lib/python2.6/site-packages (python 2.6)
安装python-devel包解决这个问题 # pip install netifaces DEPRECATION: Python 2.6 is no longer supported by the Python core team, please upgrade your Python. A future version of pip will drop support for Python 2.6 Collecting netifaces Using cached netifaces-0.10.5.tar.gz Installing collected packages: netifaces Running setup.py install for netifaces ... error Complete output from command /usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-2ZS95g/netifaces/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-e1XD5C-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_ext checking for getifaddrs...found. checking for getnameinfo...found. checking for IPv6 socket IOCTLs...not found. checking for optional header files...netash/ash.h netatalk/at.h netax25/ax25.h neteconet/ec.h netipx/ipx.h netpacket/packet.h linux/irda.h linux/atm.h linux/llc.h linux/tipc.h linux/dn.h. checking whether struct sockaddr has a length field...no. checking which sockaddr_xxx structs are defined...at ax25 in in6 ipx un ash ec ll atmpvc atmsvc dn irda llc. checking for routing socket support...no. checking for sysctl(CTL_NET...) support...no. checking for netlink support...yes. will use netlink to read routing table building 'netifaces' extension gcc -pthread -fno-strict-aliasing -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -DNDEBUG -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -D_GNU_SOURCE -fPIC -fwrapv -fPIC -DNETIFACES_VERSION=0.10.5 -DHAVE_GETIFADDRS=1 -DHAVE_GETNAMEINFO=1 -DHAVE_NETASH_ASH_H=1 -DHAVE_NETATALK_AT_H=1 -DHAVE_NETAX25_AX25_H=1 -DHAVE_NETECONET_EC_H=1 -DHAVE_NETIPX_IPX_H=1 -DHAVE_NETPACKET_PACKET_H=1 -DHAVE_LINUX_IRDA_H=1 -DHAVE_LINUX_ATM_H=1 -DHAVE_LINUX_LLC_H=1 -DHAVE_LINUX_TIPC_H=1 -DHAVE_LINUX_DN_H=1 -DHAVE_SOCKADDR_AT=1 -DHAVE_SOCKADDR_AX25=1 -DHAVE_SOCKADDR_IN=1 -DHAVE_SOCKADDR_IN6=1 -DHAVE_SOCKADDR_IPX=1 -DHAVE_SOCKADDR_UN=1 -DHAVE_SOCKADDR_ASH=1 -DHAVE_SOCKADDR_EC=1 -DHAVE_SOCKADDR_LL=1 -DHAVE_SOCKADDR_ATMPVC=1 -DHAVE_SOCKADDR_ATMSVC=1 -DHAVE_SOCKADDR_DN=1 -DHAVE_SOCKADDR_IRDA=1 -DHAVE_SOCKADDR_LLC=1 -DHAVE_PF_NETLINK=1 -I/usr/include/python2.6 -c netifaces.c -o build/temp.linux-x86_64-2.6/netifaces.o netifaces.c:1:20: error: Python.h: No such file or directory netifaces.c: In function ‘string_from_sockaddr’: netifaces.c:341: warning: implicit declaration of function ‘calloc’ netifaces.c:341: warning: incompatible implicit declaration of built-in function ‘calloc’ netifaces.c:344: warning: implicit declaration of function ‘memcpy’ netifaces.c:344: warning: incompatible implicit declaration of built-in function ‘memcpy’ netifaces.c:360: warning: implicit declaration of function ‘free’ netifaces.c:360: warning: incompatible implicit declaration of built-in function ‘free’ netifaces.c:405: warning: implicit declaration of function ‘sprintf’ netifaces.c:405: warning: incompatible implicit declaration of built-in function ‘sprintf’ netifaces.c: In function ‘string_from_netmask’: netifaces.c:491: warning: incompatible implicit declaration of built-in function ‘sprintf’ netifaces.c:493: warning: implicit declaration of function ‘strlen’ netifaces.c:493: warning: incompatible implicit declaration of built-in function ‘strlen’ netifaces.c:494: warning: implicit declaration of function ‘strcpy’ netifaces.c:494: warning: incompatible implicit declaration of built-in function ‘strcpy’ netifaces.c: At top level: netifaces.c:672: error: expected ‘)’ before ‘*’ token netifaces.c:710: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token netifaces.c:1269: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token netifaces.c:1442: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘*’ token netifaces.c:2516: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘methods’ netifaces.c: In function ‘initnetifaces’: netifaces.c:2546: error: ‘PyObject’ undeclared (first use in this function) netifaces.c:2546: error: (Each undeclared identifier is reported only once netifaces.c:2546: error: for each function it appears in.) netifaces.c:2546: error: ‘address_family_dict’ undeclared (first use in this function) netifaces.c:2547: error: ‘m’ undeclared (first use in this function) netifaces.c:2555: warning: implicit declaration of function ‘Py_InitModule3’ netifaces.c:2555: error: ‘methods’ undeclared (first use in this function) netifaces.c:2560: warning: implicit declaration of function ‘PyDict_New’ netifaces.c:2562: warning: implicit declaration of function ‘PyModule_AddIntConstant’ netifaces.c:2563: warning: implicit declaration of function ‘PyDict_SetItem’ netifaces.c:2563: warning: implicit declaration of function ‘PyInt_FromLong’ netifaces.c:2564: warning: implicit declaration of function ‘PyString_FromString’ netifaces.c:2871: warning: implicit declaration of function ‘PyModule_AddObject’ netifaces.c:2879: warning: implicit declaration of function ‘PyModule_AddStringConstant’ error: command 'gcc' failed with exit status 1 ---------------------------------------- Command "/usr/bin/python -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-2ZS95g/netifaces/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-e1XD5C-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-2ZS95g/netifaces/
[root@sys_228 ~]# salt --versions-report Salt Version: Salt: 2015.8.8.2 ... 这个版本的nodegroup会有匹配问题 配置文件的定法上也有些许不同
线上rundeck从2.6.9升级到2.6.10后无法启动,日志也没有打印任务有用的信息,一头雾水。 只有从配置文件开始着手,一看还真发现了一点不同。 我是yum源升级的,在rundeck的配置目录/etc/rundeck下发现profile文件被升级了,对比了一下新产生的文件与旧文件: -rw-r--r-- 1 rundeck rundeck 2907 Nov 15 09:45 profile -rw-r--r-- 1 root root 2038 Nov 15 09:44 profile2016-11-15 -rw-r----- 1 rundeck rundeck 2907 Nov 11 06:25 profile.rpmnew 发现新文件profile.rpmnew添加了好多新的东西。 而启动时还是使用老的配置文件,怀疑是此文件造成启动影响。 于是备份老porfile,使用rpmnew文件替换。 不出所料,成功启动。 点击(此处)折叠或打开 [rundeck@sys rundeck]$ diff -bBr profile2016-11-15 profile.rpmnew 1,2c1,17 RDECK_BASE=/var/lib/rundeck export RDECK_BASE --- > RDECK_INSTALL="${RDECK_INSTALL:-/var/lib/rundeck}" > RDECK_BASE="${RDECK_BASE:-/var/lib/rundeck}" > RDECK_CONFIG="${RDECK_CONFIG:-/etc/rundeck}" > RDECK_SERVER_BASE="${RDECK_SERVER_BASE:-$RDECK_BASE}" > RDECK_SERVER_CONFIG="${RDECK_SERVER_CONFIG:-$RDECK_CONFIG}" > RDECK_SERVER_DATA="${RDECK_SERVER_DATA:-$RDECK_BASE/data}" > RDECK_PROJECTS="${RDECK_PROJECTS:-$RDECK_BASE/projects}" > RUNDECK_TEMPDIR="${RUNDECK_TEMPDIR:-/tmp/rundeck}" > RUNDECK_WORKDIR="${RUNDECK_TEMPDIR:-$RDECK_BASE/work}" > RUNDECK_LOGDIR="${RUNDECK_LOGDIR:-$RDECK_BASE/logs}" > RDECK_JVM_SETTINGS="${RDECK_JVM_SETTINGS:- -Xmx1024m -Xms256m -XX:MaxPermSize=256m -server}" > RDECK_TRUSTSTORE_FILE="${RDECK_TRUSTSTORE_FILE:-$RDECK_CONFIG/ssl/truststore}" > RDECK_TRUSTSTORE_TYPE="${RDECK_TRUSTSTORE_TYPE:-jks}" > JAAS_CONF="${JAAS_CONF:-$RDECK_CONFIG/jaas-loginmodule.conf}" > LOGIN_MODULE="${LOGIN_MODULE:-RDpropertyfilelogin}" > RDECK_HTTP_PORT=${RDECK_HTTP_PORT:-4440} > RDECK_HTTPS_PORT=${RDECK_HTTP_PORT:-4443} 4,5d18 JAVA_CMD=java RUNDECK_TEMPDIR=/tmp/rundeck 7,17c20,21 RDECK_HTTP_PORT=4440 RDECK_HTTPS_PORT=4443 # # If JAVA_HOME is set, then add it to home and set JAVA_CMD to use the version specified in that # path. JAVA_HOME can be set in the rundeck profile. Or set in this file. #JAVA_HOME=path/to/JDK or JRE/install> if [ ! -z $JAVA_HOME ]; then PATH=$PATH:$JAVA_HOME/bin export PATH --- > # If no JAVA_CMD, try to find it in $JAVA_HOME > if [ -z "$JAVA_CMD" ] && [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ] ; then 18a23,26 > PATH=$PATH:$JAVA_HOME/bin > export JAVA_HOME > elif [ -z "$JAVA_CMD" ] ; then > JAVA_CMD=java 21,36c29,48 export CLI_CP=$(find /var/lib/rundeck/cli -name \*.jar -printf %p:) export BOOTSTRAP_CP=$(find /var/lib/rundeck/bootstrap -name \*.jar -printf %p:) export RDECK_JVM="-Djava.security.auth.login.config=/etc/rundeck/jaas-loginmodule.conf \ --- > # build classpath without lone : that includes . > for jar in $(find $RDECK_INSTALL/cli -name '*.jar') ; do > CLI_CP=${CLI_CP:+$CLI_CP:}$jar > done > for jar in $(find $RDECK_INSTALL/bootstrap -name '*.jar') ; do > BOOTSTRAP_CP=${BOOTSTRAP_CP:+$BOOTSTRAP_CP:}$jar > done > > RDECK_JVM="-Djava.security.auth.login.config=$JAAS_CONF \ > -Dloginmodule.name=$LOGIN_MODULE \ > -Drdeck.config=$RDECK_CONFIG \ > -Drundeck.server.configDir=$RDECK_SERVER_CONFIG \ > -Dserver.datastore.path=$RDECK_SERVER_DATA/rundeck \ > -Drundeck.server.serverDir=$RDECK_INSTALL \ > -Drdeck.projects=$RDECK_PROJECTS \ > -Drdeck.runlogs=$RUNDECK_LOGDIR \ > -Drundeck.config.location=$RDECK_CONFIG/rundeck-config.properties \ > -Djava.io.tmpdir=$RUNDECK_TEMPDIR \ > -Drundeck.server.workDir=$RUNDECK_WORKDIR \ > -Dserver.http.port=$RDECK_HTTP_PORT" 40c52 RDECK_JVM="$RDECK_JVM -Xmx3072m -Xms256m -XX:MaxPermSize=256m -server" --- > RDECK_JVM="$RDECK_JVM $RDECK_JVM_SETTINGS" 44,49c56,59 #export RDECK_JVM="$RDECK_JVM -Drundeck.ssl.config=/etc/rundeck/ssl/ssl.properties -Dserver.https.port=${RDECK_HTTPS_PORT}" export RDECK_SSL_OPTS="-Djavax.net.ssl.trustStore=/etc/rundeck/ssl/truststore -Djavax.net.ssl.trustStoreType=jks -Djava.protocol.handler.pkgs=com.sun.net.ssl.internal.www.protocol" #Enable local JMX monitoring #export RDECK_JVM="$RDECK_JVM -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9005 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false" --- > if [ -n "$RUNDECK_WITH_SSL" ] ; then > RDECK_JVM="$RDECK_JVM -Drundeck.ssl.config=$RDECK_SERVER_CONFIG/ssl.properties -Dserver.https.port=${RDECK_HTTPS_PORT}" > RDECK_SSL_OPTS="${RDECK_SSL_OPTS:- -Djavax.net.ssl.trustStore=$RDECK_TRUSTSTORE_FILE -Djavax.net.ssl.trustStoreType=$RDECK_TRUSTSTORE_TYPE -Djava.protocol.handler.pkgs=com.sun.net.ssl.internal.www.protocol}" > fi 51,52c61 if test -t 0 -a -z "$RUNDECK_CLI_TERSE" then --- > if [ -t 0 ] && [ -z "$RUNDECK_CLI_TERSE" ] ; then 57,60c66 if test -n "$JRE_HOME" then unset JRE_HOME fi --- > unset JRE_HOME 62a69,70 > > rundeckd="$JAVA_CMD $RDECK_JVM $RDECK_JVM_OPTS -cp $BOOTSTRAP_CP com.dtolabs.rundeck.RunServer $RDECK_BASE" [rundeck@sys rundeck]$ 后来又翻看github,发现有人也已经发现这个问题了。 https://github.com/rundeck/rundeck/issues/2164 新添加了一个调度一次的按键,同时添加了自定义标题栏配置啥的,我修改了下,不过我的任务有点多,300多个,而且许多任务执行要好几个小时甚至几天,所以暂时不能重启看效果,好像rundeck-config.properties的配置修改后必须重启才能生效,这个有点不方便啊,毕竟线上的东西不可能经常性重启。 (添加了一个run job later )()添加了一个run job laterhttps://github.com/rundeck/rundeck/issues/2164
参考:https://wiki.archlinux.org/index.php/TrackPoint 默认设置97,移动相当的慢,而且手感比较重,调整了 thinkt@linux-pw37:~> sudo vim /etc/udev/rules.d/10-trackpoint.rules 这个文件不存在,创建一个新文件 只需要添加一行配置 thinkt@linux-pw37:~> cat /etc/udev/rules.d/10-trackpoint.rules ACTION=="add", SUBSYSTEM=="input", ATTR{name}=="TPPS/2 IBM TrackPoint", ATTR{device/sensitivity}="250", ATTR{device/press_to_select}="1", ATTR{device/speed}="135" speed是移动速度,sensitivity是灵敏度,可以根据自己的实际体验修改调整到相应的数值。
任务列表不显示任何正在运行的任务,切换到历史任务,显示Got error 28 from storage engine. 补判断为Mysql数据库问题。 最终为mysql临时空间满造成。
不错的入门书Learning ELK stack 第三章: Logstash插件类型分为四类: Input Filter Output Codec Logstash input类的file插件维护了一个sincedb文件追踪监控操作文件的当前位置。默认写入$HOME/.sincedb*,游标及读取频率可以通过配置修改。 file的两个配置属性:sincedb_path,sincedb_write_interval(默认15秒读取一次文件) 这里还涉及另一个配置 start_position => "beginning" 或"end"(默认),使用beginning,如果移除了.sincedb,将重新读取历史数据,造成数据重复。 lumberjack和logstash forwarder(轻量化的logstash)使用lumberjack协议打包日志 redis经常做为logstash forwarder 与logstash的中间人角色,在高负载系统上提供获取日志的服务。 Logstash outpu类型插件: 重点是elasticsearch,email,kafka,lumberjack,redis, Logstash filter类型插件: date,drop,grok解析非结构化的日志转换成结构化,mutate重命名,移除,替换,修改字段,转换字段类型,合并字段等 Logstash codec类型插件:编码解码日志 json,mutliline 第五章节: elasticsearch > indices > documents(json) > fields(_type,mapping) pattern=>"logstash-%{+YYYY.MM.dd}"(默认index) shard:index的物理存储位置,支持Primary shard and replica shard, 默认每个document使用5个shard replica shard分布在各个节点上,可以failover与平均负载 cluster > nodes :三种角色 =>data node =>master node =>routing node & load banlancer node elasticsearch api $curl -X '://://?'d '' VERB:GET, POST, PUT,DELETE, HEAD PROTOCAL:http,https PATH:/index/type/id OPERATION_NAME: _search, _count, and so on QUERY_STRING: ?pretty for pretty print of JSON documents BODY: This makes a request for body text. 查看指定index的document thinkt@linux-pw37:~> curl -XGET 'http://192.168.56.101:9200/logstash-2016.09.09/_search?pretty' { "took" : 107, "timed_out" : false, "_shards" : { "total" : 5, "successful" : 5, "failed" : 0 }, "hits" : { "total" : 1, "max_score" : 1.0, "hits" : [ { "_index" : "logstash-2016.09.09", "_type" : "logs", "_id" : "AVf3tVjYndkMLGPlmtnx", "_score" : 1.0, "_source" : { "message" : "2016-09-09,770.099976,773.244995,759.659973,759.659973,1812200,759.659973", "@version" : "1", "@timestamp" : "2016-09-09T00:00:00.000Z", "path" : "/home/vagrant/table.csv", "host" : "localhost.localdomain", "Date" : "2016-09-09", "Open" : 770.099976, "High" : 773.244995, "Low" : 759.659973, "Close" : 759.659973, "Volume" : 1812200, "Adj_close" : 759.659973 } } ] } } 查看所有index thinkt@linux-pw37:~> curl -XGET 'http://192.168.56.101:9200/_cat/indices?v' health status index pri rep docs.count docs.deleted store.size pri.store.size yellow open logstash-2016.03.18 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.03.17 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.03.16 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.03.15 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.06.07 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.06.08 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.06.09 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.08.31 5 1 1 0 8kb 8kb yellow open logstash-2016.09.01 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.08.30 5 1 1 0 8kb 8kb yellow open logstash-2016.03.21 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.06.14 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.09.08 5 1 1 0 8kb 8kb yellow open logstash-2016.06.15 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.09.09 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.06.16 5 1 1 0 7.9kb 7.9kb yellow open logstash-2016.09.06 5 1 1 0 7.9kb 7.9kb 查看集群中所有nodes thinkt@linux-pw37:~> curl -XGET 'http://192.168.56.101:9200/_cat/nodes?v' host ip heap.percent ram.percent load node.role master name 10.0.2.15 10.0.2.15 13 56 0.00 d * node-1 logstash 的elasticsearch 插件会自动创建index kibana4特性: 搜索关键字高亮 聚合类型:桶,度量 可脚本化的字段 动态展示 kibana的主项: discover,visualize,dashboard,settings discover的search box搜索记录默认最多显示500条index documents time filter支持quick,absolute,relative三种时间过滤方式,并支持定义自动刷新间隔,也可以使用histogram直方图拖动选择时间。 查询语法使用lucene语法 Lucene n. Lucene是一个非常优秀的开源的全文搜索引擎; 我们可以在它的上面开发出各种全文搜索的应用来。Lucene在国外有很高的知名度; 现在已经是Apache的顶级项目; 在国内; 搜索方式:search box 文本查询,field查询 visualize,dashboard 阻止数据丢失 logs > borker(redis,rabbit mq,amqp,zeroMQ) > logstash > elasticsearch > kibana > nginx 加固数据访问 elasticsearch,kibana使用SSL认证访问elasticsearch browser-(ssl_key_file,ssl_cert_file)->kibana->elasticsearch Elasticsearch shield(收费) Search guard(free) 系统伸缩性 horizontally scalable fast,quick,realtime inexpensive便宜的 flexable弹性灵活的 用户量及支持力度 开源的 数据保留策略 elasticsearch curator管理indeices
KDE对多点触控屏的支持没有windows那么优秀,只能使用类似鼠标的单击操作。 下面是直接使用xinput禁用。 thinkt@linux-pw37:/usr/share/X11/xorg.conf.d> xinput ? Virtual core pointer id=2 [master pointer (3)] ? ? Virtual core XTEST pointer id=4 [slave pointer (2)] ? ? Logitech Optical USB Mouse id=10 [slave pointer (2)] ? ? TPPS/2 IBM TrackPoint id=14 [slave pointer (2)] ? ? Melfas LGD AIT Touch Controller id=9 [slave pointer (2)] ? ? SynPS/2 Synaptics TouchPad id=13 [slave pointer (2)] ? Virtual core keyboard id=3 [master keyboard (2)] ? Virtual core XTEST keyboard id=5 [slave keyboard (3)] ? Power Button id=6 [slave keyboard (3)] ? Video Bus id=7 [slave keyboard (3)] ? Sleep Button id=8 [slave keyboard (3)] ? Integrated Camera id=11 [slave keyboard (3)] ? AT Translated Set 2 keyboard id=12 [slave keyboard (3)] ? ThinkPad Extra Buttons id=15 [slave keyboard (3)] thinkt@linux-pw37:/usr/share/X11/xorg.conf.d> xinput disable 9 也可以修改/usr/share/X11/xorg.conf.d/10-evdev.conf,添加Option "Ignore" "on" Section "InputClass" Identifier "evdev touchscreen catchall" MatchIsTouchscreen "on" MatchDevicePath "/dev/input/event*" Driver "evdev" Option "Ignore" "on" EndSection
对于经常接AC的用户来说,电池保持在40%-80%之间可以让电池生命周期更长久 由于我的t460s有两块电池,分别设置 linux-pw37:~ # cat /etc/default/tlp # Battery charge thresholds (ThinkPad only, tp-smapi or acpi-call kernel module # required). Charging starts when the remaining capacity falls below the # START_CHARGE_THRESH value and stops when exceeding the STOP_CHARGE_THRESH value. # Main / Internal battery (values in %) START_CHARGE_THRESH_BAT0=40 STOP_CHARGE_THRESH_BAT0=80 # Ultrabay / Slice / Replaceable battery (values in %) START_CHARGE_THRESH_BAT1=40 STOP_CHARGE_THRESH_BAT1=80 重启tlp.service linux-pw37:~ # systemctl restart tlp.service linux-pw37:~ # tlp stat -v -b --- TLP 0.9 -------------------------------------------- +++ ThinkPad Extended Battery Functions tp-smapi = inactive (unsupported hardware) tpacpi-bat = active +++ ThinkPad Battery Status: BAT0 (Main / Internal) /sys/class/power_supply/BAT0/manufacturer = SMP /sys/class/power_supply/BAT0/model_name = 00HW023 /sys/class/power_supply/BAT0/cycle_count = (not supported) /sys/class/power_supply/BAT0/energy_full_design = 23540 [mWh] /sys/class/power_supply/BAT0/energy_full = 24100 [mWh] /sys/class/power_supply/BAT0/energy_now = 24090 [mWh] /sys/class/power_supply/BAT0/power_now = 0 [mW] /sys/class/power_supply/BAT0/status = Unknown (threshold effective) tpacpi-bat.BAT0.startThreshold = 40 [%] tpacpi-bat.BAT0.stopThreshold = 80 [%] tpacpi-bat.BAT0.forceDischarge = 0 Charge = 100.0 [%] Capacity = 102.4 [%] +++ ThinkPad Battery Status: BAT1 (Ultrabay / Slice / Replaceable) /sys/class/power_supply/BAT1/manufacturer = SANYO /sys/class/power_supply/BAT1/model_name = 01AV405 /sys/class/power_supply/BAT1/cycle_count = (not supported) /sys/class/power_supply/BAT1/energy_full_design = 26330 [mWh] /sys/class/power_supply/BAT1/energy_full = 28230 [mWh] /sys/class/power_supply/BAT1/energy_now = 28230 [mWh] /sys/class/power_supply/BAT1/power_now = 0 [mW] /sys/class/power_supply/BAT1/status = Full tpacpi-bat.BAT1.startThreshold = 40 [%] tpacpi-bat.BAT1.stopThreshold = 80 [%] tpacpi-bat.BAT1.forceDischarge = 0 Charge = 100.0 [%] Capacity = 107.2 [%] 下面是tlp官方的介绍 http://linrunner.de/en/tlp/docs/tlp-configuration.html ThinkPad Battery Charge Thresholds ThinkPads only START_CHARGE_THRESH_BAT0=75 STOP_CHARGE_THRESH_BAT0=80 START_CHARGE_THRESH_BAT1=75 STOP_CHARGE_THRESH_BAT1=80 Set ThinkPad battery charge thresholds for main battery (BAT0) and auxiliary/Ultrabay battery (BAT1). Values are given as a percentage of the full capacity. A value of 0 is translated to the hardware defaults 96 / 100%. Charging starts upon connecting AC power, but only if the remaining capacity is below the value of START_CHARGE_TRESH (start threshold). Charging stops when reaching the STOP_CHARGE_TRESH (stop threshold) value. If, however when you connect the AC adapter, charge is above the start threshold, then it will not charge. Note: the charge threshold settings are disabled by default and must be enabled explicitly by removing the leading '#'. ThinkPad T420(s)/T520/W520/X220 (and all newer models): check erratic battery behavior (FAQ). For further questions concerning charge thresholds please visit the TLP FAQ.
thinkt@linux-pw37:~> sudo tlp-stat 。。。 +++ ThinkPad Extended Battery Functions tp-smapi = inactive (kernel module 'tp_smapi' not installed) tpacpi-bat = inactive (kernel module 'acpi_call' not installed) 。。。 thinkt@linux-pw37:~> sudo systemctl enable tlp.service [sudo] root 的密码: Created symlink from /etc/systemd/system/multi-user.target.wants/tlp.service to /usr/lib/systemd/system/tlp.service. thinkt@linux-pw37:~> sudo systemctl enable tlp-sleep.service Created symlink from /etc/systemd/system/sleep.target.wants/tlp-sleep.service to /usr/lib/systemd/system/tlp-sleep.service. thinkt@linux-pw37:~> sudo systemctl mask systemd-rfkill.service Created symlink from /etc/systemd/system/systemd-rfkill.service to /dev/null. 添加源http://download.opensuse.org/repositories/home:/revealed/openSUSE_Tumbleweed/home:revealed.repo linux-pw37:/etc/zypp/repos.d # zypper in dkms-acpi_call 安装完后 +++ ThinkPad Extended Battery Functions tp-smapi = inactive (unsupported hardware) tpacpi-bat = active
lenovo thinkpad t460s升级bios版本,造成opensuse linux 引导丢失修复 原来的eps分区还在的,只是boot manager中的efi引导选项被清除了。 windows下使用easyefi添加回来就可以了 linux下使用efimanager修改添加 linux-pw37:~ # efibootmgr BootCurrent: 0002 Timeout: 2 seconds BootOrder: 001B,0002,0010,0011,0012,0013,0000,0017,0018,0019,001A,001C,0001 Boot0000* Windows Boot Manager Boot0001* opensuse boot manager (这个是我创建的引导) Boot0002* opensuse-secureboot Boot0010 Setup Boot0011 Boot Menu Boot0012 Diagnostic Splash Screen Boot0013 Lenovo Diagnostics Boot0014 Startup Interrupt Menu Boot0015 Rescue and Recovery Boot0016 MEBx Hot Key Boot0017* USB CD Boot0018* USB FDD Boot0019* NVMe0 Boot001A* ATA HDD0 Boot001B* USB HDD Boot001C* PCI LAN Boot001D* IDER BOOT CDROM Boot001E* IDER BOOT Floppy Boot001F* ATA HDD Boot0020* ATAPI CD
系统使用centos7.1 官方下载hadoop,hive,jdk的安装包,解压在新建用户hive的目录下,安装mariadb-server(mysql-server) [hive@localhost ~]$ ll total 544900 drwxrwxr-x 9 hive hive 4096 Sep 10 07:31 apache-hive-2.1.0-bin -rw-rw-r-- 1 hive hive 149599799 Jun 21 01:26 apache-hive-2.1.0-bin.tar.gz drwxrwxr-x 9 hive hive 149 Sep 9 10:45 apache-tomcat-7.0.70 -rw-r--r-- 1 hive hive 8924465 Sep 9 10:29 apache-tomcat-7.0.70.tar.gz drwxr-xr-x 9 root root 139 Aug 18 01:49 hadoop-2.7.3 -rw-r--r-- 1 root root 214092195 Aug 25 19:25 hadoop-2.7.3.tar.gz lrwxrwxrwx 1 hive hive 12 Sep 9 08:28 hadoop-last -> hadoop-2.7.3 lrwxrwxrwx 1 hive hive 21 Sep 9 09:08 hive-last -> apache-hive-2.1.0-bin drwxr-xr-x 8 hive hive 4096 Jun 23 01:56 jdk1.8.0_102 -rw-r--r-- 1 root root 181435897 Sep 9 06:57 jdk-8u102-linux-x64.tar.gz drwxr-xr-x 4 hive hive 143 May 4 11:11 mysql-connector-java-5.1.39 -rw-r--r-- 1 hive hive 3899019 Sep 10 07:18 mysql-connector-java-5.1.39.tar.gz -rw-rw-r-- 1 hive hive 11183 Sep 9 10:18 wc.txt 修改用户的环境 [hive@localhost ~]$ cat .bash_profile # .bash_profile # Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi # User specific environment and startup programs export JAVA_HOME=/home/hive/jdk1.8.0_102 export HADOOP_HOME=/home/hive/hadoop-last export HIVE_HOME=/home/hive/hive-last PATH=$PATH:$HOME/.local/bin:$HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$HIVE_HOME/conf export PATH 启动mysql数据库,为hadoop/hive添加一个用户 [root@localhost ~]# systemctl start mariadb [root@localhost ~]# mysql Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 3 Server version: 5.5.50-MariaDB MariaDB Server Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> CREATE USER 'hadoop'@'localhost' IDENTIFIED BY 'hadoop'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'hadoop'@'localhost' WITH GRANT OPTION; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> EXIT Bye MariaDB [(none)]> CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive'; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost' WITH GRANT OPTION; Query OK, 0 rows affected (0.00 sec) MariaDB [(none)]> exit Bye 进入$HIVE_HOME/conf,去掉以下三个文件的后辍.template -rw-r--r-- 1 hive hive 2378 Sep 10 06:24 hive-env.sh -rw-r--r-- 1 hive hive 2299 Jun 3 10:43 hive-exec-log4j2.properties -rw-r--r-- 1 hive hive 2950 Sep 10 06:25 hive-log4j2.properties 重命令hive-default.xml.template为hive-site.xml -rw-r--r-- 1 hive hive 225729 Sep 10 03:21 hive-site.xml 添加hive-env.sh两行 export HADOOP_HOME=/home/hive/hadoop-last export HIVE_CONF_DIR=/home/hive/hive-last/conf 修改hive-site.xml 默认hive.metastore.warehouse.dir是/user/hive/warehouse hive.exec.scratchdir是/tmp/hive 创建这个目录 如果是使用内置的derby存储metadata [hive@localhost hive-last]$ schematool -dbType derby -initSchema which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hive/.local/bin:/home/hive/bin:/home/hive/jdk1.8.0_102/bin:/home/hive/hadoop-last/bin:/home/hive/hive-last/bin:/home/hive/hive-last/conf) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/home/hive/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/home/hive/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Metastore connection URL: jdbc:derby:;databaseName=metastore_db;create=true Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver Metastore connection User: APP Starting metastore schema initialization to 2.1.0 Initialization script hive-schema-2.1.0.derby.sql Initialization script completed schemaTool completed 修改 chmod a+rw /tmp/hive/ 下面修改为使用mysql hive-site.xml 480: javax.jdo.option.ConnectionPassword 481- hive 498: javax.jdo.option.ConnectionURL 499- jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true 932: javax.jdo.option.ConnectionDriverName 933- com.mysql.jdbc.Driver 960: javax.jdo.option.ConnectionUserName 961- hive 下载mysql-connector [hive@localhost mysql-connector-java-5.1.39]$ cp mysql-connector-java-5.1.39-bin.jar ~/hive-last/lib/ -v ‘mysql-connector-java-5.1.39-bin.jar’ -> ‘/home/hive/hive-last/lib/mysql-connector-java-5.1.39-bin.jar’ [hive@localhost hive-last]$ schematool -dbType mysql -initSchema which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hive/.local/bin:/home/hive/bin:/home/hive/jdk1.8.0_102/bin:/home/hive/hadoop-last/bin:/home/hive/hive-last/bin:/home/hive/hive-last/conf) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/home/hive/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/home/hive/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Metastore connection URL: jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true Metastore Connection Driver : com.mysql.jdbc.Driver Metastore connection User: hive Starting metastore schema initialization to 2.1.0 Initialization script hive-schema-2.1.0.mysql.sql Initialization script completed schemaTool completed 修改hive-site.xml的system变量,解决变量无法识别的问题,使用绝对路径 :%s#${system:java.io.tmpdir}#/tmp/javaiotmp# :%s#${system:user.name}#hive#
Exception in thread "main" java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D at org.apache.hadoop.fs.Path.initialize(Path.java:205) at org.apache.hadoop.fs.Path.(Path.java:171) at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:631) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:550) at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:518) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:705) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:641) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:221) at org.apache.hadoop.util.RunJar.main(RunJar.java:136) Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D at java.net.URI.checkPath(URI.java:1823) at java.net.URI.(URI.java:745) at org.apache.hadoop.fs.Path.initialize(Path.java:202) ... 12 more hive好像没有识别这些变量 使用vi将system的变量修改成绝对路径 :%s#${system:java.io.tmpdir}#/tmp/javaiotmp# :%s#${system:user.name}#hive# 完美! [hive@localhost hive-last]$ hive which: no hbase in (/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hive/.local/bin:/home/hive/bin:/home/hive/jdk1.8.0_102/bin:/home/hive/hadoop-last/bin:/home/hive/hive-last/bin:/home/hive/hive-last/conf) SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/home/hive/apache-hive-2.1.0-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/home/hive/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory] Logging initialized using configuration in file:/home/hive/apache-hive-2.1.0-bin/conf/hive-log4j2.properties Async: true Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases. hive> [hive@localhost ~]$ cd - /tmp/javaiotmp [hive@localhost javaiotmp]$ ll total 0 drwxrwxr-x 2 hive hive 6 Sep 10 07:36 a54e81fa-dd9f-4219-a20b-aaa0c879d739_resources drwx------ 2 hive hive 6 Sep 10 07:37 hive [hive@localhost javaiotmp]$
Hadoop、MapReduce、YARN和Spark的区别与联系 转载:http://www.aichengxu.com/view/1103036 2015-03-17 16:37 本站整理 浏览(454) (1) Hadoop 1.0 第一代Hadoop,由分布式存储系统HDFS和分布式计算框架 MapReduce组成,其中,HDFS由一个NameNode和多个DataNode组成,MapReduce由一个JobTracker和多个 TaskTracker组成,对应Hadoop版本为Hadoop 1.x和0.21.X,0.22.x。(2) Hadoop 2.0 第 二代Hadoop,为克服Hadoop 1.0中HDFS和MapReduce存在的各种问题而提出的。针对Hadoop 1.0中的单NameNode制约HDFS的扩展性问题,提出了HDFS Federation,它让多个NameNode分管不同的目录进而实现访问隔离和横向扩展;针对Hadoop 1.0中的MapReduce在扩展性和多框架支持方面的不足,提出了全新的资源管理框架YARN(Yet Another Resource Negotiator),它将JobTracker中的资源管理和作业控制功能分开,分别由组件ResourceManager和 ApplicationMaster实现,其中,ResourceManager负责所有应用程序的资源分配,而ApplicationMaster仅负 责管理一个应用程序。对应Hadoop版本为Hadoop 0.23.x和2.x。(3) MapReduce 1.0或者MRv1(MapReduceversion 1) 第 一代MapReduce计算框架,它由两部分组成:编程模型(programming model)和运行时环境(runtime environment)。它的基本编程模型是将问题抽象成Map和Reduce两个阶段,其中Map阶段将输入数据解析成key/value,迭代调用 map()函数处理后,再以key/value的形式输出到本地目录,而Reduce阶段则将key相同的value进行规约处理,并将最终结果写到 HDFS上。它的运行时环境由两类服务组成:JobTracker和TaskTracker,其中,JobTracker负责资源管理和所有作业的控制, 而TaskTracker负责接收来自JobTracker的命令并执行它。(4)MapReduce 2.0或者MRv2(MapReduce version 2)或者NextGen MapReduc MapReduce 2.0或者MRv2具有与MRv1相同的编程模型,唯一不同的是运行时环境。MRv2是在MRv1基础上经加工之后,运行于资源管理框架YARN之上的 MRv1,它不再由JobTracker和TaskTracker组成,而是变为一个作业控制进程ApplicationMaster,且 ApplicationMaster仅负责一个作业的管理,至于资源的管理,则由YARN完成。 简而言之,MRv1是一个独立的离线计算框架,而MRv2则是运行于YARN之上的MRv1。(5)Hadoop-MapReduce(一个离线计算框架) Hadoop 是google分布式计算框架MapReduce与分布式存储系统GFS的开源实现,由分布式计算框架MapReduce和分布式存储系统 HDFS(Hadoop Distributed File System)组成,具有高容错性,高扩展性和编程接口简单等特点,现已被大部分互联网公司采用。(6)Hadoop-YARN(Hadoop 2.0的一个分支,实际上是一个资源管理系统) YARN是Hadoop的一个子项目(与MapReduce并列),它实际上是一个资源统一管理系统,可以在上面运行各种计算框架(包括MapReduce、Spark、Storm、MPI等)。 当 前Hadoop版本比较混乱,让很多用户不知所措。实际上,当前Hadoop只有两个版本:Hadoop 1.0和Hadoop 2.0,其中,Hadoop 1.0由一个分布式文件系统HDFS和一个离线计算框架MapReduce组成,而Hadoop 2.0则包含一个支持NameNode横向扩展的HDFS,一个资源管理系统YARN和一个运行在YARN上的离线计算框架MapReduce。相比于 Hadoop 1.0,Hadoop 2.0功能更加强大,且具有更好的扩展性、性能,并支持多种计算框架。 Borg/YARN /Mesos/Torca/Corona一类系统可以为公司构建一个内部的生态系统,所有应用程序和服务可以“和平而友好”地运行在该生态系统上。有了这 类系统之后,你不必忧愁使用Hadoop的哪个版本,是Hadoop 0.20.2还是 Hadoop 1.0,你也不必为选择何种计算模型而苦恼,因此各种软件版本,各种计算模型可以一起运行在一台“超级计算机”上了。 从开源角度 看,YARN的提出,从一定程度上弱化了多计算框架的优劣之争。YARN是在Hadoop MapReduce基础上演化而来的,在MapReduce时代,很多人批评MapReduce不适合迭代计算和流失计算,于是出现了Spark和 Storm等计算框架,而这些系统的开发者则在自己的网站上或者论文里与MapReduce对比,鼓吹自己的系统多么先进高效,而出现了YARN之后,则 形势变得明朗:MapReduce只是运行在YARN之上的一类应用程序抽象,Spark和Storm本质上也是,他们只是针对不同类型的应用开发的,没 有优劣之别,各有所长,合并共处,而且,今后所有计算框架的开发,不出意外的话,也应是在YARN之上。这样,一个以YARN为底层资源管理平台,多种计 算框架运行于其上的生态系统诞生了。 目前spark是一个非常流行的内存计算(或者迭代式计算,DAG计算)框架,在MapReduce因效率低下而被广为诟病的今天,spark的出现不禁让大家眼前一亮。 从架构和应用角度上看,spark是 一个仅包含计算逻辑的开发库(尽管它提供个独立运行的master/slave服务,但考虑到稳定后以及与其他类型作业的继承性,通常不会被采用),而不 包含任何资源管理和调度相关的实现,这使得spark可以灵活运行在目前比较主流的资源管理系统上,典型的代表是mesos和yarn,我们称之为 “spark on mesos”和“spark on yarn”。将spark运行在资源管理系统上将带来非常多的收益,包括:与其他计算框架共享集群资源;资源按需分配,进而提高集群资源利用率等。FrameWork On YARN 运行在YARN上的框架,包括MapReduce-On-YARN, Spark-On-YARN, Storm-On-YARN和Tez-On-YARN。 (1)MapReduce-On-YARN:YARN上的离线计算; (2)Spark-On-YARN:YARN上的内存计算; (3)Storm-On-YARN:YARN上的实时/流式计算; (4)Tez-On-YARN:YARN上的DAG计算 参考文献 1 http://blog.csdn.net/gaoyanjie55 2 http://dongxicheng.org/recommend/
转载:: http://www.cnblogs.com/sunss/archive/2010/09/09/1822300.html 提高IO性能(只需要设置 noatime) 相信对性能、优化这些关键字有兴趣的朋友都知道在 Linux 下面挂载文件系统的时候设置 noatime 可以显著提高文件系统的性能。默认情况下,Linux ext2/ext3 文件系统在文件被访问、创建、修改等的时候记录下了文件的一些时间戳,比如:文件创建时间、最近一次修改时间和最近一次访问时间。因为系统运行的时候要访 问大量文件,如果能减少一些动作(比如减少时间戳的记录次数等)将会显著提高磁盘 IO 的效率、提升文件系统的性能。Linux 提供了 noatime 这个参数来禁止记录最近一次访问时间戳。 给文件系统挂载的时候加上 noatime 参数能大幅提高文件系统性能: # vi /etc/fstab /dev/sda1 / ext3 defaults,noatime,errors=remount-ro 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 proc /proc proc defaults 0 0 /dev/sda2 swap swap defaults,noatime 0 0 修改设置后只需要重新挂载文件系统、不需要重启就可以应用新设置: # mount -o remount / # mount /dev/sda1 on / type ext3 (rw,noatime,errors=remount-ro) proc on /proc type proc (rw) devpts on /dev/pts type devpts (rw,gid=5,mode=620) none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw) 网 上很多资料都提到要同时设置 noatime 和 nodiratime,不知道这个结论来自哪里,其实不需要像设置 noatime 那样设置 nodiratime,最可靠的资料应该是源代码,VPSee 查了一下源代码,发现在内核源代码 linux-2.6.33/fs/inode.c 文件里有一个 touch_atime 函数,可以看出如果 inode 的标记位是 NOATIME 的话就直接返回了,根本就走不到 NODIRATIME 那里去,所以只设置 noatime 就可以了,不必再设置 nodiratime. void touch_atime(struct vfsmount *mnt, struct dentry *dentry) 1405{ 1406 struct inode *inode = dentry->d_inode; 1407 struct timespec now; 1408 1409 if (inode->i_flags & S_NOATIME) 1410 return; 1411 if (IS_NOATIME(inode)) 1412 return; 1413 if ((inode->i_sb->s_flags & MS_NODIRATIME) && S_ISDIR(inode->i_mode)) 1414 return; 1415 1416 if (mnt->mnt_flags & MNT_NOATIME) 1417 return; 1418 if ((mnt->mnt_flags & MNT_NODIRATIME) && S_ISDIR(inode->i_mode)) 1419 return; ... 1435}
修改.bashrc 中加入一行 export TERM=xterm 之后退出vim 不再显示内容. t@localhost ~$ vim systemd t@localhost ~$
使用yum 安装完rundeck后服务配置完成. 在浏览器中输入访问网址: youhost:4440, 页面跳转至http://localhost:4440/menu/home # su - rundeck $ vim /etc/rundeck/rundeck-config.properties grails.serverURL=http://localhost:4440 注释掉上面一行 # service rundeckd restart Stopping rundeckd: [ OK ] Starting rundeckd: [ OK ] ok,搞定. 后来发现添加job会有问题,于是把这一行修改成服务器的ip:port,目前运行良好.
t@localhost webapp$ tree . ├── cgi-bin │ ├── athletemodel.py │ ├── generate_list.py │ ├── generate_timing_data.py │ ├── kelly_c.py │ └── yate.py ├── coach.css ├── data │ ├── athletes.pickle │ ├── james.txt │ ├── julie.txt │ ├── mikey.txt │ └── sarah.txt ├── favicon.ico ├── images │ └── coach-head.jpg ├── index.html ├── simple_httpd.py └── templates ├── footer.html └── header.html 4 directories, 17 files 点击(此处)折叠或打开 -rw-r--r--. 1 t t 263 5月 25 10:17 ./simple_httpd.py #!/usr/bin/env python3 # -*- coding:utf-8 -*- from http.server import HTTPServer, CGIHTTPRequestHandler port = 8080 httpd = HTTPServer(('', port), CGIHTTPRequestHandler) print("Starting simple_httpd on port: " + str(httpd.server_port)) httpd.serve_forever() -rwxrwxr-x. 1 t t 672 5月 25 10:34 ./cgi-bin/generate_list.py #!/usr/bin/env python3 # -*- coding:utf-8 -*- #导入M,V import athletemodel, yate #glob 模块可以向操作系统查询一个文件名列表 import glob #生成一个选择运动员列表html页面 data_files = glob.glob('data/*.txt') athletes = athletemodel.put_to_store(data_files) print(yate.start_response()) print(yate.include_header("kelly教练的运动员列表")) print(yate.start_form("generate_timing_data.py")) print(yate.para("从列表中选择一个运动员:")) for each_athlete in athletes: print(yate.radio_button("which_athlete",athletes[each_athlete].name)) print(yate.end_form("Select")) print(yate.include_footer({"Home":"/index.html"})) -rwxrwxr-x. 1 t t 746 5月 25 11:33 ./cgi-bin/generate_timing_data.py #!/usr/bin/env python3 # -*- coding:utf-8 -*- #使用cgi模块处理表单数据 import cgi #cgi跟踪模块 import cgitb cgitb.enable() #将所有表单数据放在一个字典中 form_data = cgi.FieldStorage() athlete_name = form_data['which_athlete'].value import athletemodel,yate #取出pickle数据 athletes = athletemodel.get_from_store() #生成运动员时间显示页面 print(yate.start_response()) print(yate.include_header("时间数据信息")) print(yate.header("运动员:" + athlete_name + ", 出生日期:" + athletes[athlete_name].dob + ".")) print(yate.para("最佳三次成绩为:")) print(yate.u_list(athletes[athlete_name].top3)) print(yate.include_footer({"Home":"/index.html","其他成员数据":"generate_list.py"})) -rwxr-xr-x. 1 t t 1511 5月 25 10:24 ./cgi-bin/yate.py #从string模块中导入类,支持简单的字符串替换模板. from string import Template #生成文件类型 def start_response(resp="text/html"): return('Content-type: ' + resp + ';charset=utf-8\n\n') # def include_header(the_title): with open('templates/header.html') as headf: head_text = headf.read() header = Template(head_text) return(header.substitute(title=the_title)) def include_footer(the_links): with open('templates/footer.html') as footf: foot_text = footf.read() link_string = '' for key in the_links: link_string += '+ the_links[key] + '">' + key + ' ' footer = Template(foot_text) return(footer.substitute(links=link_string)) def start_form(the_url, form_type="POST"): return('+ the_url + '" method="' + form_type + '">') def end_form(submit_msg="Submit"): return(' + submit_msg + '">') def radio_button(rb_name, rb_value): return('+ rb_name + '" value="' + rb_value + '"> ' + rb_value + ' ') def u_list(items): u_string = ' ' for item in items: u_string += ' ' + item + ' ' u_string += ' ' return(u_string) def header(header_text, header_level=2): return('(header_level) + '>' + header_text + ' + str(header_level) + '>') def para(para_text): return(' ' + para_text + ' ') -rwxr-xr-x. 1 t t 2086 5月 25 11:30 ./cgi-bin/athletemodel.py #!/usr/bin/evn python3 # -*- coding:utf8 -*- ''' 1.读取文件 => put_to_store => pickle 2.pickle => get_from_store => viewer ''' import pickle from kelly_c import athletelist #磁盘文件处理 def openfile(filename): try: #打开文件 with open(filename) as athlete_file: #读取数据 data = athlete_file.readline() #初步处理数据,去空,以,号分割 value_list= data.strip().split(',') #分别取出有格式的三种数据 username = value_list.pop(0) userdob = value_list.pop(0) usertimes= value_list #返回实例对象 athlete_instance=athletelist(username,userdob,usertimes) return(athlete_instance) except IOError as ioerr: print('File error %s' % ioerr) return(None) #内容压制,使用字典数据类型. def put_to_store(files_list): #字典生成 all_athletes = {} for each_file in files_list: each_athlete = openfile(each_file) all_athletes[each_athlete.name] = each_athlete #pickle数据压制 try: with open('data/athletes.pickle','wb') as athlfile: pickle.dump(all_athletes,athlfile) except IOError as ioerr: print('File error(%s)' % ioerr) return(all_athletes) def get_from_store(): all_athletes = {} #pickle数据解压 try: with open('data/athletes.pickle','rb') as athlfile: all_athletes=pickle.load(athlfile) except IOError as ioerr: print('File error(%s)' % ioerr) return(all_athletes) #files_list = ["../data/james.txt", "../data/julie.txt", "../data/mikey.txt", "../data/sarah.txt"] #data = put_to_store(files_list) #test ''' print(get_from_store()) print(dir()) type(data) print('Use put_to_store()') for each_athlete in data: print(data[each_athlete].name,data[each_athlete].dob) print('Use get_from_store()') data_copy = get_from_store() for each_athlete in data_copy: print(data_copy[each_athlete].name,data_copy[each_athlete].dob) ''' -rwxrwxr-x. 1 t t 605 5月 25 11:33 ./cgi-bin/kelly_c.py #!/usr/bin/env python3 # -*- coding:utf-8 -*- class athletelist(list): def __init__(self, a_name, a_dob=None, a_times=[]): list.__init__([]) self.name = a_name self.dob = a_dob self.extend(a_times) @property def top3(self): return(sorted(set([sanitize(t) for t in self]))[0:3]) #处理字符,转换成m.s格式 def sanitize(time_string): if '-' in time_string: splitter = '-' elif ':' in time_string: splitter = ':' else: return time_string (min, sec) = time_string.split(splitter) return (min + '.' + sec) -rw-r--r--. 1 t t 84 7月 24 2010 ./data/mikey.txt Mikey McManus,2002-2-24,2:22,3.01,3:01,3.02,3:02,3.02,3:22,2.49,2:38,2:40,2.22,2-31 -rw-r--r--. 1 t t 82 7月 25 2010 ./data/julie.txt Julie Jones,2002-8-17,2.59,2.11,2:11,2:23,3-10,2-23,3:10,3.21,3-21,3.01,3.02,2:59 -rw-r--r--. 1 t t 80 8月 29 2010 ./data/james.txt James Lee,2002-3-14,2-34,3:21,2.34,2.45,3.01,2:01,2:01,3:10,2-22,2-01,2.01,2:16 -rw-r--r--. 1 t t 84 7月 25 2010 ./data/sarah.txt Sarah Sweeney,2002-6-17,2:58,2.58,2:39,2-25,2-55,2:54,2.18,2:55,2:55,2:22,2-21,2.22 t@localhost webapp$ find . -name '*.html' -exec ls -l {} \; -exec cat {} \;
点击(此处)折叠或打开 #!/usr/bin/env python3 # -*- coding:utf-8 -*- import os class athlete: def __init__(self, athlete_name, athlete_dob=None, athlete_times=[]): self.name = athlete_name self.dob = athlete_dob self.times= athlete_times #运动员最好的3组成绩 def top3(self): return(sorted(set([sanitize(time) for time in self.times]))[0:3]) #为运动员添加一个成绩 def add_time(self, time_value): self.times.append(time_value) #为运动员添加一组成绩,使用列表类型. def add_times(self, time_list): self.times.extend(time_list) def openfile(filename): try: #打开文件 with open(filename) as athlete_file: #读取数据 data = athlete_file.readline() value_list= data.strip().split(',') username = value_list.pop(0) userdob = value_list.pop(0) usertimes= value_list #返回实例对象 athlete_instance=athlete(username,userdob,usertimes) return(athlete_instance) except IOError as ioerr: print('File error %s' % ioerr) return(None) #处理字符,转换成m.s格式 def sanitize(time_string): if '-' in time_string: splitter = '-' elif ':' in time_string: splitter = ':' else: return time_string (min, sec) = time_string.split(splitter) return (min + '.' + sec) for name in ["james", "julie", "mikey", "sarah"]: name = openfile(name+'.txt') print(name.name + '的三次最佳成绩是' + str(name.top3())) talen = athlete('talen') talen.add_time('3.25') talen.add_time('3.45') talen.add_times(['1.30','2.59']) print(str(talen.top3())) t@localhost 6$ python3 kelly_c.py James Lee的三次最佳成绩是['2.01', '2.16', '2.22'] Julie Jones的三次最佳成绩是['2.11', '2.23', '2.59'] Mikey McManus的三次最佳成绩是['2.22', '2.31', '2.38'] Sarah Sweeney的三次最佳成绩是['2.18', '2.21', '2.22'] ['1.30', '2.59', '3.25'] 继承list类 点击(此处)折叠或打开 #!/usr/bin/env python3 # -*- coding:utf-8 -*-import osclass athlete: def __init__(self, athlete_name, athlete_dob=None, athlete_times=[]): self.name = athlete_name self.dob = athlete_dob self.times= athlete_times #运动员最好的3组成绩 def top3(self): return(sorted(set([sanitize(time) for time in self.times]))[0:3]) #为运动员添加一个成绩 def add_time(self, time_value): self.times.append(time_value) #为运动员添加一组成绩,使用列表类型. def add_times(self, time_list): self.times.extend(time_list) #使用类继承,继承内置list类class athletelist(list): def __init__(self, a_name, a_dob=None, a_times=[]): list.__init__([]) self.name = a_name self.dob = a_dob self.extend(a_times) def top3(self): return(sorted(set([sanitize(t) for t in self]))[0:3])def openfile(filename): try: #打开文件 with open(filename) as athlete_file: #读取数据 data = athlete_file.readline() value_list= data.strip().split(',') username = value_list.pop(0) userdob = value_list.pop(0) usertimes= value_list #返回实例对象 athlete_instance=athlete(username,userdob,usertimes) return(athlete_instance) except IOError as ioerr: print('File error %s' % ioerr) return(None) #处理字符,转换成m.s格式def sanitize(time_string): if '-' in time_string: splitter = '-' elif ':' in time_string: splitter = ':' else: return time_string (min, sec) = time_string.split(splitter) return (min + '.' + sec)for name in ["james", "julie", "mikey", "sarah"]: name = openfile(name+'.txt') print(name.name + '的三次最佳成绩是' + str(name.top3())) talen = athlete('talen') talen.add_time('3.25') talen.add_time('3.45') talen.add_times(['1.30','2.59'])print(str(talen.top3())) ken = athletelist('ken') #为运动员添加一个成绩 #由于继承list,不需要自己再定义添加方法,直接使用list的方法 ken.append('4.25') #为运动员添加一组成绩,使用列表类型. ken.extend(['4.56','6.20','5.20'])print(ken.top3())
使用函数 点击(此处)折叠或打开 #!/usr/bin/env python3 # -*- coding:utf-8 -*- #函数与处理的数据打包一起. def filetolist(file,listname): try: #打开文件 with open(file) as jaf: #读取数据行 data = jaf.readline() #转换成list listname=data.strip().split(',') data = {} data['name'] = listname.pop(0) data['dob'] = listname.pop(0) data['time'] = listname result = print(data['name'] + '的三次最佳成绩是' + str(sorted(set([sanitize(each_it) for each_it in data['time']]))[0:3])) #return listname return result except IOError as ioerr: print('File error : %s' % ioerr) return(None) #处理字符,转换成m.s格式 def sanitize(time_string): if '-' in time_string: splitter = '-' elif ':' in time_string: splitter = ':' else: return time_string (min, sec) = time_string.split(splitter) return (min + '.' + sec) for name in ["james", "julie", "mikey", "sarah"]: thelist=filetolist(name+".txt",name) #使用列表 #username=name+'user' #userdob =name+'dob' #username = thelist.pop(0) #userdob = thelist.pop(0) ##使用列表推导式 #name2 = [sanitize(each_it) for each_it in thelist] ##使用工厂函数set() #try: # print(username + '的最佳成绩是' + str(sorted(set(name2))[0:3])) #except TypeError as typerr: # print('list type error %s' % typerr) #使用字典 使用类 点击(此处)折叠或打开 #!/usr/bin/env python3 # -*- coding:utf-8 -*- import os class athlete: def __init__(self, athlete_name, athlete_dob=None, athlete_times=[]): self.name = athlete_name self.dob = athlete_dob self.times= athlete_times def top3(self): return(sorted(set([sanitize(time) for time in self.times]))[0:3]) def openfile(filename): try: #打开文件 with open(filename) as athlete_file: #读取数据 data = athlete_file.readline() value_list= data.strip().split(',') username = value_list.pop(0) userdob = value_list.pop(0) usertimes= value_list #返回实例对象 athlete_instance=athlete(username,userdob,usertimes) return(athlete_instance) except IOError as ioerr: print('File error %s' % ioerr) return(None) #处理字符,转换成m.s格式 def sanitize(time_string): if '-' in time_string: splitter = '-' elif ':' in time_string: splitter = ':' else: return time_string (min, sec) = time_string.split(splitter) return (min + '.' + sec) for name in ["james", "julie", "mikey", "sarah"]: name = openfile(name+'.txt') print(name.name + '的三次最佳成绩是' + str(name.top3())) t@localhost 6$ python3 kelly.py James Lee的三次最佳成绩是['2.01', '2.16', '2.22'] Julie Jones的三次最佳成绩是['2.11', '2.23', '2.59'] Mikey McManus的三次最佳成绩是['2.22', '2.31', '2.38'] Sarah Sweeney的三次最佳成绩是['2.18', '2.21', '2.22']
点击(此处)折叠或打开 #!/usr/bin/env python3 # -*- coding:utf-8 -*- def filetolist(file,listname): try: #打开文件 with open(file) as jaf: #读取数据行 data = jaf.readline() #转换成list listname=data.strip().split(',') return listname except IOError as ioerr: print('File error : %s' % ioerr) return(None) #处理字符,转换成m.s格式 def sanitize(time_string): if '-' in time_string: splitter = '-' elif ':' in time_string: splitter = ':' else: return time_string (min, sec) = time_string.split(splitter) return (min + '.' + sec) for name in ["james", "julie", "mikey", "sarah"]: thelist=filetolist(name+".txt",name) cleanname = 'clean' + name cleanname = [] for each_t in thelist: cleanname.append(sanitize(each_t)) print(sorted(cleanname)) #使用列表推导式 print('use list comprehension') cleanname2 = [sanitize(each_it) for each_it in thelist] print(sorted(cleanname2)) #使用for 处理 # sorted_name= sorted(cleanname2) # unique_name = 'unique_' + name # unique_name = [] # for item in sorted_name: # if item not in unique_name: # unique_name.append(item) # print(unique_name[0:3]) #使用工厂函数set() print(sorted(set(cleanname2))[0:3]) t@localhost 5$ python3 kelly.py ['2.01', '2.01', '2.22', '2.34', '2.34', '2.45', '3.01', '3.10', '3.21'] use list comprehension ['2.01', '2.01', '2.22', '2.34', '2.34', '2.45', '3.01', '3.10', '3.21'] ['2.01', '2.22', '2.34'] ['2.11', '2.11', '2.23', '2.23', '2.59', '3.10', '3.10', '3.21', '3.21'] use list comprehension ['2.11', '2.11', '2.23', '2.23', '2.59', '3.10', '3.10', '3.21', '3.21'] ['2.11', '2.23', '2.59'] ['2.22', '2.38', '2.49', '3.01', '3.01', '3.02', '3.02', '3.02', '3.22'] use list comprehension ['2.22', '2.38', '2.49', '3.01', '3.01', '3.02', '3.02', '3.02', '3.22'] ['2.22', '2.38', '2.49'] ['2.18', '2.25', '2.39', '2.54', '2.55', '2.55', '2.55', '2.58', '2.58'] use list comprehension ['2.18', '2.25', '2.39', '2.54', '2.55', '2.55', '2.55', '2.58', '2.58'] ['2.18', '2.25', '2.39']
t@localhost .ssh$ ssh -vT git@github.com OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016 ... sign_and_send_pubkey: signing failed: agent refused operation ... Permission denied (publickey). t@localhost .ssh$ eval "$(ssh-agent -s)" Agent pid 6894 t@localhost .ssh$ ssh -vT git@github.com OpenSSH_7.2p2, OpenSSL 1.0.2h-fips 3 May 2016 Hi talenhao! You've successfully authenticated, but GitHub does not provide shell access. debug1: channel 0: free: client-session, nchannels 1 Transferred: sent 2672, received 1776 bytes, in 0.8 seconds Bytes per second: sent 3418.5, received 2272.2 debug1: Exit status 1 t@localhost .ssh$ 参考: https://help.github.com/articles/error-agent-admitted-failure-to-sign/
本篇快速总结几个Python的常见配置工具,包括setuptools、pip、virtualenv。 setuptools管理Python的第三方包,将包安装到site-package下,安装的包后缀一般为.egg,实际为ZIP格式。默认从http://pypi.python.org/pypi 下载包,能够解决Python包的依赖关系。 安装了setuptools之后即可用 easy_install 命令安装包,有多种安装方式可以选择。 # easy_install PACKAGE # 普通安装 # easy_install /home/yeolar/pkg/PACKAGE.egg # 从本地或网络文件系统中安装 # easy_install http://trac-hacks.org/svn/iniadminplugin/0.11/ # 从指定的下载路径安装 # easy_install http://pypi.python.org/simple/PACKAGE/PACKAGE-0.1.2.4.tar.gz # 从URL源码包安装,条件是PACKAGE-0.1.2.4.tar.gz包中的根目录中必须包括setup.py文件 # easy_install -f http://pypi.python.org/simple/ PACKAGE # 从web上面搜索包,并自动安装 # easy_install PACKAGE==0.1.2.1 # 指定包的版本,如果指定的版本高于现已安装的版本就是升级了 # easy_install -U PACKAGE # 升级到最新版本,不指定版本就会升级到最新版本 # easy_install -U PACKAGE==0.1.2.2 # 升级到指定版本 # easy_install -m PACKAGE # 卸载包,卸载后还要手动删除遗留文件 pip也是一个包管理工具,它和setuptools类似,如果使用virtualenv,会自动安装一个pip。 # pip install PACKAGE # 安装包 # pip -f URL install PACKAGE # 从指定URL下载安装包 # pip -U install PACKAGE # 升级包 virtualenv是一个Python环境配置和切换的工具,可以用它配置多个Python运行环境,和系统中的Python环境隔离,即所谓的沙盒。沙盒的好处包括: 解决库之间的版本依赖,比如同一系统上不同应用依赖同一个库的不同版本。 解决权限限制,比如你没有 root 权限。 尝试新的工具,而不用担心污染系统环境。 $ virtualenv py-for-web 这样就创建了一个名为py-for-web的Python虚拟环境,实际上就是将Python环境克隆了一份。然后可以用 sourcepy-for-web/bin/activate 命令来更新终端配置,修改环境变量。接下来的操作就只对py-for-web环境产生影响了,可以使用 pip 命令在这里安装包,当然也可以直接安装。 $ source py-for-web/bin/activate # 启用虚拟环境 $ deactivate # 退出虚拟环境 转载: http://www.yeolar.com/note/2012/08/18/setuptools-pip-virtualenv/
文件结构 webapp_temlate.py templates/ ├── form.html ├── home.html └── signok.html webapp_temlate.py 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' from flask import Flask from flask import request from flask import render_template app = Flask(__name__) @app.route('/', methods=['GET','POST']) def home(): return render_template('home.html') @app.route('/signin',methods=['GET']) def signin_from(): return render_template('form.html') @app.route('/signin',methods=['POST']) def signin(): username=request.form['username'] password=request.form['password'] if username == 'admin' and password == 'password': return render_template('signok.html',username=username) return render_template('form.html', message='Bad username and password', username=username) if __name__ == '__main__': app.run()
点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' from flask import Flask from flask import request app = Flask(__name__) @app.route('/', methods=['GET','POST']) def home(): return ' home! ' @app.route('/signin',methods=['GET']) def signin_from(): return ''' Sign in ''' @app.route('/signin',methods=['POST']) def signin(): if request.form['username'] == 'admin' and request.form['password'] == 'password': return ' hello admin! ' return ' bad username and password. ' if __name__ == '__main__': app.run() t@localhost untitled$ python3 webapp.py * Running on http://127.0.0.1:5000/ 127.0.0.1 - - [10/May/2016 19:49:06] "GET / HTTP/1.1" 200 - 127.0.0.1 - - [10/May/2016 19:49:06] "GET /favicon.ico HTTP/1.1" 404 - 127.0.0.1 - - [10/May/2016 19:49:23] "GET /signin HTTP/1.1" 200 - 127.0.0.1 - - [10/May/2016 19:49:39] "POST /signin HTTP/1.1" 200 - 127.0.0.1 - - [10/May/2016 19:49:52] "GET /signin HTTP/1.1" 200 - 127.0.0.1 - - [10/May/2016 19:49:57] "POST /signin HTTP/1.1" 200 - 直接输入网址 输入http://127.0.0.1:5000/signin 用户名:admin,密码:password 用户名输入错误
html wsgi web server 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' def application(environ,start_response): start_response('200 ok',[('Content-Type', 'text/html')]) body='hello,%s !' % (environ['PATH_INFO'][1:] or 'web') return [body.encode('utf-8')] 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' from wsgiref.simple_server import make_server from wsgi_client import application httpd=make_server('', 10086,application) print('Serving HTTP on port 10086') httpd.serve_forever() t@localhost untitled$ python3 wsgi_server.py Serving HTTP on port 10086 127.0.0.1 - - [10/May/2016 14:28:21] "GET /talen HTTP/1.1" 200 22 127.0.0.1 - - [10/May/2016 14:28:27] "GET /max HTTP/1.1" 200 20 127.0.0.1 - - [10/May/2016 14:29:22] "GET /china HTTP/1.1" 200 22 127.0.0.1 - - [10/May/2016 14:29:22] "GET /favicon.ico HTTP/1.1" 200 28 127.0.0.1 - - [10/May/2016 14:29:34] "GET /US HTTP/1.1" 200 19 127.0.0.1 - - [10/May/2016 14:29:35] "GET /favicon.ico HTTP/1.1" 200 28 t@localhost untitled$ netstat -lntp |grep 10086 (Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.) tcp 0 0 0.0.0.0:10086 0.0.0.0:* LISTEN 4397/python3 t@localhost untitled$ curl http://localhost:10086/talen hello,talen ! t@localhost untitled$ curl http://localhost:10086/max hello,max !
客户端socket 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' #导入socket网络编程模块 import socket #创建客户端通信对象 client_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #建立客户端与服务器连接,ip+port以数组的方式传入 client_socket.connect(('www.sina.com.cn', 80)) #执行客户端操作 client_socket.send(b'GET / HTTP/1.1\r\nHost: www.sina.com.cn\r\nConnection: close\r\n\r\n') #接收数据 buffer=[] while True: datarecv=client_socket.recv(1024) if datarecv: buffer.append(datarecv) else: break data = b''.join(buffer) #关闭链接 client_socket.close() header, html = data.split(b'\r\n\r\n',1) print(header.decode('utf-8')) #数据写入文件 with open('sina.html', 'wb') as f: f.write(html) t@localhost untitled$ python3 socket_client.py HTTP/1.1 200 OK Content-Type: text/html Vary: Accept-Encoding X-Powered-By: schi_v1.02 Server: nginx Date: Mon, 09 May 2016 08:26:19 GMT Last-Modified: Mon, 09 May 2016 08:24:36 GMT Expires: Mon, 09 May 2016 08:27:19 GMT Cache-Control: max-age=60 Age: 48 Content-Length: 549273 X-Cache: HIT from localhost Connection: close 服务端socket 服务端 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' #导入库 import socket, threading, time #创建socket实例 server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM) #绑定监听网卡 server_socket.bind(('0.0.0.0', 10086)) #监听连接,最大连接数5 server_socket.listen(5) print('Wait for connection...') #定义处理函数 def tcplink(sock, addr): print('sock:',type(sock)) print('addr:',type(addr)) print('Accept new connection from %s:%s' % addr) sock.send(b'Welcome!') while True: data = sock.recv(1024) time.sleep(1) if not data or data.decode(('utf-8')) == 'exit': break sock.send(('Hello, %s!' % data.decode('utf-8')).encode('utf-8')) sock.close() print('Connection from %s:%s closed.' % addr ) #创建永久循环来接受客户端连接 while True: #接受一个新连接 sock, addr = server_socket.accept() #创建一个线程处理请求 t = threading.Thread(target=tcplink, args=(sock, addr)) t.start() 客户端 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' import socket socket_client = socket.socket(socket.AF_INET,socket.SOCK_STREAM) socket_client.connect(('127.0.0.1',10086)) print(socket_client.recv(1024).decode('utf-8')) for data in [b'talen',b'eric',b'tom']: socket_client.send(data) print(socket_client.recv(1024).decode('utf-8')) socket_client.send(b'exit') socket_client.close() print('the connection from %s:%s is closed' % addr) # addr是一个tuple(IP,port),需要两个%s来接受数据。the connection from 127.0.0.1:20481 is closed t@localhost untitled$ python3 socket_client2.py Welcome! Hello, talen! Hello, eric! Hello, tom! 参考学习:http://www.liaoxuefeng.com/wiki/0014316089557264a6b348958f449949df42a6d3a2e542c000/001432004374523e495f640612f4b08975398796939ec3c000
task_master.txttask_worker.txt 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' import time,random,queue from multiprocessing.managers import BaseManager task_queue = queue.Queue() result_queue = queue.Queue() class QueueManager(BaseManager): pass QueueManager.register('get_task_queue', callable=lambda: task_queue) QueueManager.register('get_result_queue', callable=lambda: result_queue) manager = QueueManager(address=('', 5000), authkey=b'talen') manager.start() task = manager.get_task_queue() result = manager.get_result_queue() for i in range(10): n = random.randint(0,9000) print('Put task %d' % n) task.put(n) for i in range(10): r=result.get(timeout=10) print("Result : %s " % r) manager.shutdown() print('master exit.') 点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' import time, queue, sys from multiprocessing.managers import BaseManager class QueueManger(BaseManager): pass QueueManger.register('get_task_queue') QueueManger.register('get_result_queue') server_addr='127.0.0.1' print('Connect to server %s '% server_addr) m=QueueManger(address=(server_addr,5000), authkey=b'talen') m.connect() task=m.get_task_queue() result=m.get_result_queue() for i in range(10): try: n=task.get(timeout=1) print('run task %d * %d ...' %(n,n)) r='%d * %d = %d' % (n,n,n*n) time.sleep(1) result.put(r) except Queue.Empty: print('task queue is empty') print('worker exit.') t@localhost untitled$ python3 task_master.py Put task 6811 Put task 5164 Put task 8492 Put task 177 Put task 5496 Put task 8724 Put task 6422 Put task 2887 Put task 287 Put task 876 Result : 6811 * 6811 = 46389721 Result : 5164 * 5164 = 26666896 Result : 8492 * 8492 = 72114064 Result : 177 * 177 = 31329 Result : 5496 * 5496 = 30206016 Result : 8724 * 8724 = 76108176 Result : 6422 * 6422 = 41242084 Result : 2887 * 2887 = 8334769 Result : 287 * 287 = 82369 Result : 876 * 876 = 767376 master exit. t@localhost untitled$ t@localhost untitled$ python3 task_worker.py Connect to server 127.0.0.1 run task 6811 * 6811 ... run task 5164 * 5164 ... run task 8492 * 8492 ... run task 177 * 177 ... run task 5496 * 5496 ... run task 8724 * 8724 ... run task 6422 * 6422 ... run task 2887 * 2887 ... run task 287 * 287 ... run task 876 * 876 ... worker exit. t@localhost untitled$ 参考学习:http://www.liaoxuefeng.com/wiki/0014316089557264a6b348958f449949df42a6d3a2e542c000/001431929340191970154d52b9d484b88a7b343708fcc60000#0
APACHE -VS- NGINX. 2015 EDITION The choice used to be clear: You want convenience – go with Apache If you want speed – then it’s Nginx Or lighttpd. Or whatever, but NOT Apache web server. Sometimes they were even used in conjunction – Nginx on the front, to spoon-feed slower client connections and serve static content (using almost no memory for that), and Apache at the back, to generate dynamic content. Digital ocean has covered the practical considerations of running one or the other (or both) very nicely, so I’m not going to. What I am going to tell you though is that it is outdated. Long gone the days when Nginx had a significant advantage over Apache. If you run a dynamic website, such as WordPress, Apache can now be just as good in terms of speed, so instead of rushing over to Nginx, I’d like to suggest an alternative approach. Disclaimer: this article is mostly focused on websites running on PHP, it is however also relevant to those running dynamic websites using any other apache module, such as mod_python, mod_ruby, mod_perl etc. WHAT WAS WRONG WITH APACHE You know what used to be the main problem with Apache? It’s that it was only possible to serve dynamic websites through Apache modules. For example, to serve a PHP-based website, Apache would use a module calledmod_php (module, that many websites use to this day) and that module used heaps of memory (pun intended). Yet the actual problem was that one httpd process was only able to handle one connection at a time (think of httpd process as a separate program on your server that doesn’t like sharing resources with other programs). So even if you were only serving static files such as css files, javascripts or images, Apache used separate processes to serve them, and all the extra memory. Whereas Nginx used so called events-based architecture that allowed single process to handle hundreds of connections. What you may not know though, is that in version 2.4 (which has been around since 2012), Apache can now use the same method to handle connections, which Nginx used to be famous for. Yes, now a single Apache process can handle tens, hundreds or even thousands of connections. PROOF To prove it, I want to share results from two sets of benchmarks I ran with each web server: Serving static content Serving dynamic content Both benchmarks were done on the same 4-core machine with 7.5G of RAM, configured to best of my knowledge. SERVING STATIC CONTENT Let’s start with static files. For this test, I used the same 150kb jpeg image file and I just kept fetching it at an increasing number of parallel requests using ab. One thing I was interested in was how many such requests each one will handle per second. Here’s a result that says it all: APACHE 10770 req/s @ 512 PARALLEL REQUESTS NGINX 20232 req/s @ 512 PARALLEL REQUESTS I take it you’re laughing out loud right now. Clearly Nginx smashed Apache into pieces. Well laugh no more and here’s why: unless your server is only serving static content, this benchmark is pretty much irrelevant. Let me say that again The results of this bechmark are only interesting if you are using your server for static content mostly rather than generating PHP or some other dynamic content. If you are indeed serving static content only, then by all means go with Nginx, especially if you’re either already running more than one server, or considering running one more because your current static content server can hardly keep up. Otherwise, here’s one thing from this particular benchmark that did interest me: how much memory will the server consume during the test? Here’s the result: SERVER MEMORY USAGE WITH APACHE 15.5% SERVER MEMORY USAGE WITH NGINX 11.8% The reason it is interesting, is that with a typical Apache configuration (e.g. mod_php), such a test would have killed Apache almost instantly. Apache would have used the entire server memory and the server would have slowed down to a crawl. As a safeguard, one would likely have configured it to only serve a hundred or so requests in parallel, but then a lot of the requests would’ve been queued up and, consequentially, they would’ve take even more time to complete. To summarize the results of this first test, when configured properly, Apache is now capable of handling an impressive amount of concurrent requests with very low memory footprint and in that regard, it is an acceptable choice even if your server is only serving static files. SERVING DYNAMIC CONTENT This is where things get really exciting. If you are considering Apache or Nginx for your dynamic website, be it WordPress, Joomla, Drupal or any other 3rd party or in-house web app, what your server will be doing most of the time is running code. In fact, the amount of time spent serving static files will be disproportionally low compared to dynamic content, therefore it’s way more important to see how well can Apache handle dynamic content and how that compares to Nginx. Drumroll, please! APACHE 108 req/s @ 16 PARALLEL REQUESTS NGINX 108 req/s @ 16 PARALLEL REQUESTS Exactly! They’re the same. And the reason results are the same is that both Apache and Nginx are pretty much sleeping during this test. All they do is pass the request on to php-fpm (we’ll talk about it more later), wait for a response and then send it back to the user. And while they wait, they can keep serving static files without any need to launch extra processes for that. In terms of memory, both servers have used the same amount of memory too. So, when it comes to dynamic websites, Apache is now just as good an alternative as Nginx or any other events-based web server. And the reason is exactly that – Apache can now use events-based approach too, as long as you configure it to do so. CONFIGURING APACHE PROPERLY I mentioned in the beginning that Apache can be configured to use events instead of processes since version 2.4. Truth is, it has been available since 2.2, except that in version 2.2 it was considered experimental and in order to use this new approach, you would have to rebuild the server. Let’s not go down that path and switch to 2.4 instead (if you have not done so yet). Before we configure Apache though, we must first set up php-fcgi. We’ll use php-fpm (which stands for PHP FastCGI Process Manager) that will act as a server that we’ll be forward requests to for PHP processing. PHP-FPM Depending on the OS and repository you are using, the actual command to get php-fpm installed will be different. I will assume CentOS/RHEL 6 but feel free to adapt it to your distribution of choice. On CentOS you would install it with: 1 # yum install php-fpm There are several configuration files involved (could vary between different distributions): 1 2 /etc/php-fpm.conf - main configuration file /etc/php-fpm.d/*.conf - additional config files We’ll use a stock configuration file /etc/php-fpm.d/www.conf and we’ll make a few small changes: 1 2 3 4 ; listen = 127.0.0.1:9000 listen = /var/run/php.socket listen.owner = apache ; or whatever user is apache running under listen.group = apache ; likewise Also it’s good to know what the main php-fpm controls are: 1 2 3 4 5 6 7 pm = dynamic - leave it there so more child processes are - created when needed pm.max_children - max number of php processors to run pm.max_spare_servers - how many to keep when the workload drops down pm.max_requests - how many user requests to handle before - recycling. Don't leave it at 0 (default), - set to 1000 if not sure. Once you’re done with the changes, start it and configure to start during server startup: 1 2 # chkconfig php-fpm on # service php-fpm start APACHE First, check the version of Apache httpd server you are using (httpd -V should do it) and if you’re still on 2.2, let’s set up 2.4. One way to do it on CentOS 6 is through “CentOS Software Collections”: 1 2 # yum install centos-release-SCL # yum install httpd24-httpd.x86_64 It will install Apache 2.4 with all of the files and configuration under /opt/rh/httpd24/root/etc/httpd/. I’ll leave all of the standard configuration options for you to handle (there’s usually not much to change there anyways), but here’s what you want to do to start using events-based architecture: 1 2 3 4 # cd /opt/rh/httpd24/root/etc/httpd/conf.modules.d # nano 00-mpm.conf #LoadModule mpm_prefork_module modules/mod_mpm_prefork.so LoadModule mpm_event_module modules/mod_mpm_event.so Now create a new file, say, 05-php.conf to configure Apache to php-fpm interface: 1 2 3 4 5 6 7 8 9 10 11 12 13 # nano 05-php.conf IfModule proxy_fcgi_module> Proxy "unix:/var/run/php.socket|fcgi://php-fpm" timeout=300> /Proxy> /IfModule> Directory "/path/to/web/root"> IfModule proxy_fcgi_module> FilesMatch \.php$> SetHandler "proxy:fcgi://php-fpm/" /FilesMatch> /IfModule> /Directory> And start the server: 1 # service httpd24-httpd start If you’re upgrading from 2.2, it’s probably a good idea to first start this new Apache on an alternative port, say 81 (that’s maybe as simple as changing Listen directive) and test it before you commit to the change. Once you’re good with it though, stop the old apache and reconfigure the startup options: 1 2 3 4 # service httpd stop # chkconfig httpd off # service httpd24-httpd start # chkconfig httpd24-httpd on SUMMARY Apache httpd is a great web server and the new mpm_event module takes it to entirely new heights. Nginx can still outdo Apache in some edge cases (i.e. serving static content only), but when it comes to dynamic websites, which most of the web 2.0 is built on, Apache is now just as good of a choice as Nginx. And if you are already running mpm_event based configuration, I’d recommend to focus your optimization efforts elsewhere rather than looking at Nginx as an opportunity. 转载http://www.speedemy.com/apache-vs-nginx-2015/ I wrote a little while ago about how, for running PHP, Nginx was not faster than Apache. At first I figured that it would be and then it turned out not to be, though only by a bit. But since Apache also has an event-based MPM I wanted to see if the opposite results were true; that if Apache were using its event MPM it would be about the same as Nginx. I had heard that Apache 2.2’s event MPM wasn’t great (it was experimental) but that 2.4 was better, possibly even faster, than Nginx. So I had a few spare moments this Friday and figured I would try it out. I basically ran ab at concurrency levels of 1, 10, 25, 50, 100 and 1000. Like before the results surprised me. The first run with Nginx was impressive. It peaked at 14,000 requests per second. Given my wimpy VM that I ran it on, those numbers are pretty good. What surprised me was that Apache was only half that. I will say for the record that I do not know how to tune the event MPM. But I don’t really have to tune Nginx to get 14k requests per second so I was expecting a little better from Apache. So I pulled out all of the LoadModule statements I could but still have a functional implementation of Apache. While the numbers were 25% better or so they were still well shy of what Nginx was capable of. Then I added the prefork MPM to provide a baseline. Again, I was surprised. The event MPM was faster than the prefork MPM for static content, but not by much. So it seems that if you are serving static content Nginx is still your best bet. If you are serving static content from a CDN or have a load balancer in front of Apache which is running PHP then the prefork MPM is the way to go. While the event MPM will help with concurrency it will not help you speed up PHP and so is not really needed. 转载:http://www.eschrade.com/page/performance-of-apache-2-4-with-the-event-mpm-compared-to-nginx/ Apache 2.4 vs Nginx Benchmark Showdown Disclaimer: This test was highly unscientific, with a +/- 10% fabricated margin of error. No maths were performed to gather proper averages and statistics. No effort was made to ensure consistency of the two Apache builds and no legitimate effort was made with regard to proper scientific rigor during this test. This test does not take into account memory usage, responsiveness of the server under load, or any other relevant metric that would be of more use than this test. I highly encourage you to do your own testing and draw your own conclusions. How the tests were performed: I wanted to simulate a VPS environment similar to a basic Linode. A base install of Ubuntu Server 10.04 was installed in VMWare with an older/slower 7200RPM sata drive for storage. This drive was not in use by any other system during the test, nor were any other VMs active on the host. The guest was given 512MB of RAM and 1 CPU core of the host's 8 cores. The host's CPUs are dual Xeon X5365 @ 3Ghz. Apache 2.2 was installed along with nginx 0.7.65, later Apache 2.4 was compiled on this same system. Testing was performed using Apache JMeter on an 8 core Xeon workstation running Windows 7 and 32GB of RAM with a 1Gb/s link to the VMWare host. Requests per second were determined by rounding off the throughput displayed in the Summary Report listener. Each test was run until the requests per second stabilized. The Apache 2.2 server was only tested with the Prefork MPM, while the Apache 2.4 server was tested with both Prefork and Event. Apache's KeepAlive setting was on throughout the testing and set at 2 seconds.Update: I received several requests to post memory usage statistics so I've updated the Jquery test with memory results. Again, care was not taken to keep the Apache builds consistent. It concerns me that the Prefork build of Apache 2.4 was using so much memory compared to the other Apache builds. Take these results with a grain of salt, but trust that Nginx definitely uses significantly less memory than Apache.Update 2 - Nginx 1.0.12: I received some flak for using an older version of Nginx, so I tested with Nginx 1.0.12 and it was around 4% slower than the results shown here.Test 1 - 21KB text file HTTP Server Req/s Apache 2.2 Prefork 2220 Apache 2.4 Prefork 2250 Apache 2.4 Event 2300 Nginx 2600 Test 2 - 2B text file consisting of a single period. HTTP Server Req/s Apache 2.2 Prefork 4400 Apache 2.4 Prefork 4700 Apache 2.4 Event 4810 Nginx 6650 Test 3 - jquery.min.js (92KB) HTTP Server Req/s Memory Usage Apache 2.2 Prefork 650 12MB Apache 2.4 Prefork 770 72MB Apache 2.4 Event 820 20MB Nginx 1000 2MB Test 4 - PHP output of phpinfo() HTTP Server Req/s Apache 2.2 Prefork 525 Apache 2.4 Prefork 575 Nginx FastCGI 450 转载http://mondotech.blogspot.jp/2012/02/apache-24-vs-nginx-benchmark-showdown.html
服务器提供服务的方式 网络服务器由于要同时为多个客户提供服务,就必须使用某种方式来支持这种多任务的服务方式。一般情况下可以有三种方式来选择,多进程方式、多线程方式及异步方式。其中,多进程方式中服务器对一个客户要使用一个进程来提供服务,由于在操作系统中,生成一个进程需要进程内存复制等额外的开销,这样在客户较多时的性能就会降低。为了克服这种生成进程的额外开销,可以使用多线程方式或异步方式。在多线程方式中,使用进程中的多个线程提供服务,由于线程的开销较小,性能就会提高。事实上,不需要任何额外开销的方式还是异步方式,它使用非阻塞的方式与每个客户通信,服务器使用一个进程进行轮询就行了。 虽然异步方式最为高效,但它也有自己的缺点。因为异步方式下,多个任务之间的调度是由服务器程序自身来完成的,而且一旦一个地方出现问题则整个服务器就会出现问题。因此,向这种服务器增加功能,一方面要遵从该服务器自身特定的任务调度方式,另一方面要确保代码中没有错误存在,这就限制了服务器的功能,使得异步方式的Web 服务器的效率最高,但功能简单。例如Unix 平台上的thttpd 就是这样的一种服务器,然而由于它提供的功能少,只能是满足少部分人的需要。即便如此,thttpd 每隔一段时间还会出现一些问题,幸运的是,它出问题时从不是进入死循环,而是被操作系统杀死,这样就可以使用一个shell 循环立即重启动thttpd,从而基本不影响Web 服务。 由于多线程方式使用线程进行任务调度,这样服务器的开发由于遵从标准,从而变得简单并有利于多人协作。然而多个线程位于同一个进程内,可以访问同样的内存空间,因此存在线程之间的影响,并且申请的内存必须确保申请和释放。对于服务器系统来讲,由于它要数天、数月甚至数年连续不停的运转,一点点错误就会逐渐积累而最终导致影响服务器的正常运转,因此很难编写一个高稳定性的多线程服务器程序。微软的IIS 就是使用的多线程方式,由于微软聚集了相当多优秀程序员,所以IIS 基本上还是值得信赖的,当然我也遇到过很多系统管理员,他们根据经验定期启动所管理的NT 服务器,以预防不可预料的Web 服务停止现象。 多进程方式的优势就在于稳定性,因为一个进程退出的时候,操作系统会回收其占用的资源,从而使它不会留下任何垃圾。即便程序中出现错误,由于进程是相互隔离的,那么这个错误不会积累起来,而是随着这个进程的退出而得到清除。 预生成进程方式的性能 由于Apache 是采用的多进程方式提供服务,为了提高性能,Apache 采用了一种特别的方式,即预生成进程模型。分析多进程方式比其他两种方式开销大的主要原因,是对每一次客户请求,都要生成一个子进程以便进行处理,因此为了避免这种开销,可以使用预先生成的进程来提供服务,并且每个进程在提供一次服务之后也不会立即退出,而是仍然保留在系统中,等待下一次请求。 这里就可以看出,在理想情况下,预先生成的多个进程可以全速回应相应数量的浏览器客户请求,而没有额外的性能开销,因此就完全可以和线程或异步方式相媲美。然而在实际运行当中,由于预先生成的进程毕竟要占用系统资源,如系统内存和CPU 处理能力, 这样如果预先生成的进程超过需要,性能反而会降低。因此Apache 就采用了这样的一种策略,在系统中保持一定的空闲进程,当空闲进程较少时就自动生成,当空闲进程较多时就让一些进程退出。 由于Apache 采用这样的预生成进程模型,就导致预先要生成多少进程、保留多少空闲进程、一个进程提供多少次服务等等成为与性能密切相关的问题,然而,这些设置都是与具体条件密切相关的。例如,越多的进程需要占用越多的内存,所分得的CPU 处理时间就越少,因此系统的物理内存和CPU 处理能力就决定了进程的最大数量。而Apache 提供的基本配置是为了适应大多数情况,在客户请求较少时也不占用过多资源,因此并不是最高性能的设置。而大多数Web 服务器测试的条件下,服务器的内存、CPU 处理能力都不是问题,甚至内存大到足以将所有要访问的文档都可以放在系统缓冲中,因而无须考虑磁盘处理能力,这种情况和实际应用完全不同。因此,SGI 的一位开发者通过调整设置,并使用他自己对Apache 代码的一些改动,在同一个SGI Origin 200 服务器上使用SPECweb96 进行测试,调整后的服务器可以比原始设置提高10 倍的速度,当然这是针对SPECweb96 这个测试程序进行的调整,在实际使用中不可能会有这样巨大的差别。这至少从侧面说明了测试结果并不是绝对的。 此外,Apache 的另一个特点是它的功能特别丰富,而每种功能通常就需要进行特别的处理,这会影响Apache 的性能,然而对于具体的情况,却不是每种特性都是必要的,因此可以通过减少这些功能来增加性能。此外,操作系统的调整对于增强Apache 的性能也是非常重要的。如何根据服务器的实际情况调整操作系统以及Apache 的参数,这些内容在Apache 的文档中都有非常详细的描述。这些文档包含在每个Apache 安装文件中,也可以直接从它的主页得到。 Apache 2.0 展望 虽然Apache 服务器使用预生成进程的方式提高了服务器的性能,然而,程方式本身的不足仍然存在,随着访问数量的增多,进程方式比其他两种方式需要消耗更多的内存和CPU 处理能力,这就限制了单台计算机提供更大负载的能力。如果在低端计算机上想服务更多的请求的话,使用异步方式的thttpd 更为适合。例如,一台512MB 内存双CPU的Linux 服务器提供1000 个并发访问时,其负载就会变得相当高,常常会由于内存用光而无法运行程序,这种情况是由于Linux 重视物理内存,不重视交换空间的原因,如果使用同样配置的FreeBSD 作操作系统,情况会有所改变,然而此时由于需要进行内存交换,就无法达到最优性能了。 因此,这些情况下为了支持更改的负载,完全采用进程方式就不太合适了,而应该利用线程节约资源的优点。 然而,在即将到来的Apache 2.0 中,一切都会变得更完美,Apache 2.0 将充分考虑到进程带来的稳定性特征,以及线程带来高效率的特点。它会预生成多个进程,而每个进程中使用多个线程提供Web 服务。由于存在多个进程,即使一个进程死了也不会影响整个Web 服务。对于不支持进程的操作系统,如Windows,那么就使用多个线程提供服务,反之也是一样。然而,只有同时支持线程和进程的操作系统,才能充分利用Apache 2.0带来的稳定性和高负载能力。 事实上当前的Apache 并不是与线程无关,Windows 版本的Apache 是使用线程的,但按照Apache 文档的说法,Windows 版本的Apache 性能并不好,主要原因是它在移植过程中是使用的Windows 的POSIX 子系统,而Windows 本身的有些特性效率更高。而在Apache2.0 中,使用了APR(Apache Portable Run-time)特性,这种特性对不同的操作系统提供了一个抽象层,以便Apache 能利用Windows 的一些非POSIX 的特性。 Apache2.0在性能上的改善最吸引人.在支持POSIX线程的Unix系统上,Apache可以通过不同的MPM运行在一种多进程与多线程相混合的模式下,增强部分配置的可扩充性能.相比于Apache1.3,2.0版本做了大量的优化来提升处理能力和可伸缩性,并且大多数改进在默认状态下即可生效.但是在编译和运行时刻,2.0也有许多可以显著提高性能的选择. MPM(Multi-ProcessingModules,多道处理模块)是Apache2.0中影响性能的最核心特性. 我主要来说一下prefork和worker工作模式。 prefork的工作原理 如果不用“——with-mpm”显式指定某种MPM,prefork就是Unix平台上缺省的MPM.它所采用的预派生子进程方式也是Apache1.3中采用的模式.prefork本身并没有使用到线程,2.0版使用它是为了与1.3版保持兼容性;另一方面,prefork用单独的子进程来处理不同的请求,进程之间是彼此独立的,这也使其成为最稳定的MPM之一. prefork的工作原理是,控制进程在最初建立“StartServers”个子进程后,为了满足MinSpareServers设置的需要创建一个进程,等待一秒钟,继续创建两个,再等待一秒钟,继续创建四个……如此按指数级增加创建的进程数,最多达到每秒32个,直到满足MinSpareServers设置的值为止.这就是预派生(prefork)的由来.这种模式可以不必在请求到来时再产生新的进程,从而减小了系统开销以增加性能. worker的工作原理 相对于prefork,worker是2.0版中全新的支持多线程和多进程混合模型的MPM.由于使用线程来处理,所以可以处理相对海量的请求,而系统资源的开销要小于基于进程的服务器.但是,worker也使用了多进程,每个进程又生成多个线程,以获得基于进程服务器的稳定性.这种MPM的工作方式将是Apache2.0的发展趋势. worker的工作原理是,由主控制进程生成“StartServers”个子进程,每个子进程中包含固定的ThreadsPerChild线程数,各个线程独立地处理请求.同样,为了不在请求到来时再生成线程,MinSpareThreads和MaxSpareThreads设置了最少和最多的空闲线程数;而MaxClients设置了所有子进程中的线程总数.如果现有子进程中的线程总数不能满足负载,控制进程将派生新的子进程. Worker模式下所能同时处理的请求总数是由子进程总数乘以ThreadsPerChild值决定的,应该大于等于MaxClients.如果负载很大,现有的子进程数不能满足时,控制进程会派生新的子进程.默认最大的子进程总数是16,加大时也需要显式声明ServerLimit(最大值是20000) 需要注意的是,如果显式声明了ServerLimit,那么它乘以ThreadsPerChild的值必须大于等于MaxClients,而且MaxClients必须是ThreadsPerChild的整数倍,否则Apache将会自动调节到一个相应值(可能是个非期望值). 采用传统的生成子进程的方式来提供服务的Apache,适合服务比较复杂的情况,但性能没有单进程的服务器高,尤其在高负载的情况下更是如此。 对于重负载的Apache专业服务器,可以简单的将以上SpareServers、StartServers、MaxClients四值设相同。 Squid是单进程的服务器,处理静态页面比Apache提高一个数量级。同样,Windows平台的IIS在静态页面上的处理性能也较Apache高几倍的性能(但不如Apache稳定)。 如有两台Apache服务器,第一个Apache服务器只提供静态内容和代理服务,可将MaxClients设置较大。第二个Apache服务器要提供消耗处理器能力的动态网页服务,要将MaxClients设置较小。 因此,应充分利用Apache的原理特性及工作平台,合理地配置以下参数,在运行时动态调整,以使Apache达到最合理的状态: MaxKeepAliveRequests 100 # 一次连接可以进行的HTTP请求的最大请求次数(比如客户一次连接中请求几十个页面)。 MinSpareServers 5 MaxSpareServers 10 # Apache预先生成多个空余的子进程驻留系统中,用于处理客户请求。两个参数用于设置最小的空余子进程数量及最多的空闲子进程数量。 StartServers 5 # 设置httpd启动时启动的子进程数量。这个参数应设置为前两个值之间的一个数值。小于或大于前两个数值都没有意义。 MaxClients 150 # 服务器支持的最大并发访问的客户数。 # 应根据服务器的物理内存及处理器动态调整。 MaxRequestsPerChild 30 # 每个子进程处理的服务请求次数。超过此值后,子进程副本退出,重新由原始的htttd进程中重新复制一个干净的副本,以提高系统的稳定性。 # 对于静态页面,产生的内存垃圾少,可设置为2000,甚至更高;如服务器载入各种不同的功能模块,产生内存垃圾多,可将此值降低。 # 对于高稳定的系统,如FreeBSD,可设成1000,或更高。 转自:http://blog.tianya.cn/blogger/post_show.asp?BlogID=40003&PostID=4585547
点击(此处)折叠或打开 #!/usr/bin/env python3 #-*- coding:utf-8 -*- ''' ''' #多进程,pool from multiprocessing import Process from multiprocessing import Pool import os import time import random def f(name): print('hello, %s,pid=%s' % (name, os.getpid())) if __name__ == '__main__': print('Parent process %s ' % os.getpid()) p=Process(target=f, args=('talen',)) print('Child process will start.') p.start() p.join() print('Child process end') def long_time_task(name): print('Run task %s (%s)...' % (name,os.getpid())) start=time.time() time.sleep(random.random() * 3) end=time.time() print('Task %s runs %0.2f seconds' % (name,(end - start ))) if __name__ == '__main__': print('Parent process %s ' % os.getpid()) pp=Pool(4) for i in range(6): pp.apply_async(long_time_task,args=(i,)) print('Child process will start.') pp.close() pp.join() print('Child process end') #子进程 import subprocess print('$ nslookup htfchina.blog.chinaunix.net') r = subprocess.call(['nslookup','htfchina.blog.chinaunix.net']) print('Exit code :',r) #输入网址 print('$ nslookup') subp=subprocess.Popen(['nslookup '], shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE) output, err = subp.communicate(b'set q=mx\nwww.baidu.com\nexit\n') print(output.decode('utf-8')) print('Exit code:',subp.returncode) 点击(此处)折叠或打开 /usr/bin/python3 /home/t/PycharmProjects/untitled/mutliprocessing_t.py Parent process 14001 Child process will start. hello, talen,pid=14002 Child process end Parent process 14001 Child process will start. Run task 0 (14003)... Run task 1 (14004)... Run task 2 (14005)... Run task 3 (14006)... Task 3 runs 0.02 seconds Run task 4 (14006)... Task 0 runs 2.07 seconds Run task 5 (14003)... Task 1 runs 2.46 seconds Task 4 runs 2.58 seconds Task 2 runs 2.97 seconds Task 5 runs 2.33 seconds Child process end $ nslookup htfchina.blog.chinaunix.net Server: 10.10.106.201 Address: 10.10.106.201#53 Non-authoritative answer: Name: htfchina.blog.chinaunix.net Address: 61.55.167.140 Exit code : 0 $ nslookup Server: 10.10.106.201 Address: 10.10.106.201#53 Non-authoritative answer: www.baidu.com canonical name = www.a.shifen.com. Authoritative answers can be found from: a.shifen.com origin = ns1.a.shifen.com mail addr = baidu_dns_master.baidu.com serial = 1605030003 refresh = 5 retry = 5 expire = 86400 minimum = 3600 Exit code: 0 Process finished with exit code 0
项目目录下存在与系统模块名称冲突的os.py文件,删除即可. 注意文件名称命名时不能与系统模块名称相同.
http://genggeng.iteye.com/blog/1290458 staticmethod 基本上和一个全局函数差不多,只不过可以通过类或类的实例对象(python里光说对象总是容易产生混淆, 因为什么都是对象,包括类,而实际上类实例对象才是对应静态语言中所谓对象的东西)来调用而已, 不会隐式地传入任何参数。这个和静态语言中的静态方法比较像。 classmethod 是和一个class相关的方法,可以通过类或类实例调用,并将该class对象(不是class的实例对象)隐式地 当作第一个参数传入。就这种方法可能会比较奇怪一点,不过只要你搞清楚了python里class也是个真实地 存在于内存中的对象,而不是静态语言中只存在于编译期间的类型。 正常的方法就是和一个类的实例对象相关的方法,通过类实例对象进行调用,并将该实例对象隐式地作为第一 个参数传入,这个也和其它语言比较像。 可如下示例: Python代码 #!/usr/bin/python #coding:utf-8 #author: gavingeng #date: 2011-12-03 10:50:01 class Person: def __init__(self): print "init" @staticmethod def sayHello(hello): if not hello: hello='hello' print "i will sya %s" %hello @classmethod def introduce(clazz,hello): clazz.sayHello(hello) print "from introduce method" def hello(self,hello): self.sayHello(hello) print "from hello method" def main(): Person.sayHello("haha") Person.introduce("hello world!") #Person.hello("self.hello") #TypeError: unbound method hello() must be called with Person instance as first argument (got str instance instead) print "*" * 20 p = Person() p.sayHello("haha") p.introduce("hello world!") p.hello("self.hello") if __name__=='__main__': main() output: Shell代码 i will sya haha i will sya hello world! from introduce method ******************** init i will sya haha i will sya hello world! from introduce method i will sya self.hello from hello method
在saltstack原码salt-2015.8.8.2/salt/version.py中 点击(此处)折叠或打开 if __name__ == '__main__': print(__version__) 经常有程序这样写: 点击(此处)折叠或打开 def main(): ...... if __name == "__main__": main(); 顺便学习了一下__name__ 在模块中直接使用,__name__是__main__; 在模块中导入模块,__name__是模块名; 在类中使用,__name__是类名. Modules… Predefined (writable) attributes: __name__ is the module’s name; … Classes… Special attributes: __name__ is the class name; 29.4. __main__ — Top-level script environment '__main__' is the name of the scope in which top-level code executes. A module’s __name__ is set equal to '__main__' when read from standard input, a script, or from an interactive prompt. A module can discover whether or not it is running in the main scope by checking its own __name__, which allows a common idiom for conditionally executing code in a module when it is run as a script or with python -m but not when it is imported: if __name__ == "__main__": # execute only if run as a script main() For a package, the same effect can be achieved by including a __main__.py module, the contents of which will be executed when the module is run with -m. t@localhost python$ cat namemethod.py #!/usr/bin/env python3 def tprint(): print('__name__ is %s' %(__name__)) if __name__ == '__main__': tprint() else: print('import:') tprint() t@localhost python$ ./namemethod.py __name__ is __main__ t@localhost python$ cat test.py #!/usr/bin/env python3 mport namemethod tprint() t@localhost python$ ./test.py import: __name__ is namemethod __name__ is namemethod
t@localhost ~$ pycharm-professional OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=350m; support was removed in 8.0 log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. Startup Error: Application cannot start in headless mode 搜索源中有个java-1.8.0-openjdk-headless.x86_64 t@localhost ~$ sudo dnf install java-1.8.0-openjdk-headless.x86_64 java-1.8.0-openjdk.x86_64
转载自:http://www.cnblogs.com/harrychinese/p/python_future_package.html?utm_source=tuicool&utm_medium=referral python __future__ package的几个特性 我学习python过程, 和学习其它编程知识一样, 不是先读大部头书系统学习, 而是看博客和直接实践, 慢慢将这些知识点连成线, 再扩展到面. 这个过程缺点和优点都很明显. 缺点是, 有些知识点可能因为一直没有机会碰到, 就一直是盲点, 另外从点到面过程较长. 好在我自学能力很强, 基本碰到的问题都能搞得定. 近期研究github开源项目有几个发现, 代码多带有: 1. from __future__ import absolute_import2. from __future__ import unicode_literals3. 在根package的 __init__.py, 加上版本号和作者等信息,__version__ = '0.0.2' __author__ = 'somebody'4. 如果源码保存为utf-8格式, 文件头加上如下注释, # -*- coding: utf-8 -*- ==============================__future__的absolute_import==============================from __future__ import absolute_import, 字面理解好像是仅仅允许绝对引用, 其实不然, 真实意思是禁用 implicit relative import, 但并不会禁掉 explicit relative import. 举个例子, 目录结构如下, -cake |- __init__.py |- icing.py |- sponge.py -drink |- __init__.py |- water.py 在 sponge.py 引用 icing , 有多种方法: 1. import icing # implicit relative import, py2已强烈不推荐使用, py3已经不可用了 2. from . import icing # explicit relative import, python.org 官方虽不推荐, 但这却是事实标准 3. from cake import icing # absolute import, python 官方推荐. --------------------------使用absolute_import, 常碰到的一个问题--------------------------使用__future__ absolute_import 之后, 常遇到的如下这一问题, 示例: -PackageA |- module1.py |- module2.py |- __init__.py module1.py 的代码示例: from __future__ import absolute_impact from . import module2 #引入同包下的另一个module if __name__=="__main__": print("module2 was imported in module1.") 运行 module1.py 会报错, 报错信息: ValueError: Attempted relative import in non-package. 原因分析: from . import module2 这样的写法是显式相对引用, 这种引用方式只能用于 package 中, 而不能用于主模块中. 因为[主module]的name总是为 __main__, 并没有层次结构, 也就无从谈起相对引用了. 换句话, if __name__=="__main__": 和相对引用是不能并存的. 解决方法: 方法1: 在 module1 中使用绝对引用, 这个最简单了, 但相对引用的好处也没了. 方法2: 使用 python -m 来启动你的 module1.py, 这个也不推荐. 方法3(推荐): 在 module1 中, 加个main()函数, 然后再新建一个 PackageA/entry.py 做为主程序, 在 entry.py 中使用绝对引用来导入 module1 , 并调用 module1.main(), 这一办法虽不完美, 但我觉得是最好的方法了. ============================== unicode_literals ============================== from __future__ import unicode_literals 在python 2.x中, 对于字符串, 默认还不是采用 unicode 编码的, 除非在字符串前加上前缀u. 比如: >>>x='中国' >>>x '\xd6\xd0\xb9\xfa' >>>print(x) 中国 >>> >>>x=u'中国' >>>x u'\u4e2d\u56fd' >>>print(x) 中国 在python3中默认的编码采用了unicode, 并取消了前缀u. 如果代码要兼容python2/3, 就很麻烦了. 通常有如下3种做法, 其中前两个做法都不推荐: 1. 不管是汉字还是英文, 字符串前面统一不加u. 这种处理方式多数情况下没有问题, 比如print输出, 但字符串如果需要做encode/decode, 就很麻烦. 2. 加python版本判断, 如果 sys.version >3 的话, 字符串不加前缀u, 如果是py2, 加上前缀u. 可以想象, 业务逻辑中再加上这样的判断, 代码会变得很难看. 3. 现在有第3种, 即引入unicode_literals, from __future__ import unicode_literals, 这样在py2下, '中国'这样的字符串不用家前缀u, 也是unicode编码. ============================== 引申阅读 ============================== http://blog.ludovf.net/python-str-unicode/ http://blogs.skicelab.com/maurizio/unicode-common-pitfalls.html