Are university libraries abroad like the ones in China? Scrape journal subject headings with a Python script!

Summary: This post walks through a Python script that looks up a journal by ISSN in several online library catalogs (WorldCat, CARLI I-Share, and MOBIUS) and scrapes the journal's title and subject headings from the item's record page.
```python
import argparse
import csv
import re
import sys
import time
from urllib.request import urlopen

from bs4 import BeautifulSoup
from tqdm import tqdm

# Parameters for each supported catalog system:
#   'base_url'     : beginning part of the URL, from 'http://' up to (but not
#                    including) the first '/'
#   'search_url'   : URL path for the online catalog search (including the
#                    leading '/'); '{0}' marks where the ISSN query is substituted
#   'search_title' : CSS selector for the parent element of the anchor
#                    containing the journal title on the search results page
#   'bib_record'   : CSS selector for the record metadata on the catalog
#                    item's HTML page
#   'bib_title'    : CSS selector for the parent element of the anchor
#                    containing the journal title
#   'bib_subjects' : tag name of the table element, in the context of
#                    bib_record, whose text begins with "Topics" or "Subjects"
catalogs = {
    'worldcat': {
        'base_url': 'https://www.worldcat.org',
        'search_url': '/search?qt=worldcat_org_all&q={0}',
        'search_title': '.result.details .name',
        'bib_record': 'div#bibdata',
        'bib_title': 'div#bibdata h1.title',
        'bib_subjects': 'th'
    },
    'carli_i-share': {
        'base_url': 'https://vufind.carli.illinois.edu',
        'search_url': '/all/vf-sie/Search/Home?lookfor={0}&type=isn&start_over=0&submit=Find&search=new',
        'search_title': '.result .resultitem',
        'bib_record': '.record table.citation',
        'bib_title': '.record h1',
        'bib_subjects': 'th'
    },
    'mobius': {
        'base_url': 'https://searchmobius.org',
        'search_url': '/iii/encore/search/C__S{0}%20__Orightresult__U?lang=eng&suite=cobalt',
        'search_title': '.dpBibTitle .title',
        'bib_record': 'table#bibInfoDetails',
        'bib_title': 'div#bibTitle',
        'bib_subjects': 'td'
    }
}
```
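The `search_title`, `bib_record`, and similar entries are plain CSS selectors, consumed later by BeautifulSoup. A minimal sketch of how one of them matches, using a made-up results snippet shaped like WorldCat's markup (the HTML here is an assumption for illustration only):

```python
from bs4 import BeautifulSoup

# Hypothetical search-results snippet in the shape the 'worldcat'
# 'search_title' selector (".result.details .name") expects.
html = """
<div class="result details">
  <div class="name"><a href="/title/12345">Example Journal</a></div>
</div>
"""
soup = BeautifulSoup(html, features="html.parser")
# select() returns all matches; the scraper takes the first one
title = soup.select(".result.details .name")[0]
print(title.find("a")["href"])  # -> /title/12345
```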
```python
# Obtain the right parameters for a specific catalog system
# Input: catalog name: 'worldcat', 'carli_i-share', 'mobius'
# Output: dictionary of catalog parameters
def get_catalog_params(catalog_key):
    try:
        return catalogs[catalog_key]
    except KeyError:
        print('Error - unknown catalog %s' % catalog_key)
```
```python
# Search the catalog for an item by ISSN
# Input: ISSN, catalog parameters
# Output: full URL for the catalog item (or None if not found)
def search_catalog(issn, p=catalogs['carli_i-share']):
    title_url = None
    # catalog URL for searching by ISSN
    url = p['base_url'] + p['search_url'].format(issn)
    u = urlopen(url)
    try:
        html = u.read().decode('utf-8')
    finally:
        u.close()
    try:
        soup = BeautifulSoup(html, features='html.parser')
        title = soup.select(p['search_title'])[0]
        title_url = title.find('a')['href']
    except Exception:
        print('Error - unable to search catalog by ISSN')
        return title_url
    return p['base_url'] + title_url
```
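Seen in isolation, the URL construction in `search_catalog` is just `str.format` on the `search_url` template. For example, with the MOBIUS parameters from the dictionary above and Nature's print ISSN:

```python
# Rebuild the search URL the way search_catalog does, using the
# 'mobius' entry from the catalogs dictionary.
p = {
    'base_url': 'https://searchmobius.org',
    'search_url': '/iii/encore/search/C__S{0}%20__Orightresult__U?lang=eng&suite=cobalt',
}
url = p['base_url'] + p['search_url'].format('0028-0836')  # 0028-0836 is Nature's print ISSN
print(url)
# -> https://searchmobius.org/iii/encore/search/C__S0028-0836%20__Orightresult__U?lang=eng&suite=cobalt
```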
```python
# Scrape the catalog item URL for metadata
# Input: full URL, catalog parameters
# Output: dictionary of catalog item metadata, including title and subjects
def scrape_catalog_item(url, p=catalogs['carli_i-share']):
    result = {'title': None, 'subjects': None}
    u = urlopen(url)
    try:
        html = u.read().decode('utf-8')
    finally:
        u.close()
    try:
        soup = BeautifulSoup(html, features='html.parser')

        # title
        try:
            title = soup.select_one(p['bib_title']).contents[0].strip()
            # save title to result dictionary
            result['title'] = title
        except Exception:
            print('Error - unable to scrape title from url')

        # subjects
        try:
            record = soup.select_one(p['bib_record'])
            subject = record.find_all(p['bib_subjects'], string=re.compile('(Subjects*|Topics*)'))[0]
            subject_header_row = subject.parent
            subject_anchors = subject_header_row.find_all('a')
            subjects = []
            for anchor in subject_anchors:
                subjects.append(anchor.string.strip())
            # save subjects to result dictionary
            result['subjects'] = subjects
        except Exception:
            print('Error - unable to scrape subjects from url')
    except Exception:
        print('Error - unable to scrape url')
    return result
```
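The trickiest step in `scrape_catalog_item` is locating the subjects row: `find_all` with `string=re.compile(...)` finds the cell whose text begins with "Subjects" or "Topics", and the anchors inside its parent row are then collected. A sketch on a hypothetical bib-record table (the HTML and subject values are assumptions, shaped like the MOBIUS `bibInfoDetails` table):

```python
import re
from bs4 import BeautifulSoup

# Hypothetical bib-record table in the shape the scraper expects.
html = """
<table id="bibInfoDetails">
  <tr><td>Title</td><td>Example Journal</td></tr>
  <tr><td>Subjects</td><td><a href="#">Library science</a> <a href="#">Periodicals</a></td></tr>
</table>
"""
soup = BeautifulSoup(html, features="html.parser")
record = soup.select_one("table#bibInfoDetails")
# Find the label cell, then gather the anchors from its row
subject = record.find_all("td", string=re.compile("(Subjects*|Topics*)"))[0]
anchors = subject.parent.find_all("a")
print([a.string.strip() for a in anchors])  # -> ['Library science', 'Periodicals']
```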


```python
# Search for a catalog item and process metadata from the item's HTML page
# Input: ISSN, catalog parameters
# Output: dictionary of values: issn, catalog url, title, subjects
def get_issn_data(issn, p=catalogs['carli_i-share']):
    results = {'issn': issn, 'url': None, 'title': None, 'subjects': None}
    time.sleep(time_delay)
    url = search_catalog(issn, p)
    results['url'] = url
    if url:  # only parse metadata for a valid URL
        time.sleep(time_delay)
        item_data = scrape_catalog_item(url, p)
        results['title'] = item_data['title']
        if item_data['subjects'] is not None:
            results['subjects'] = ','.join(item_data['subjects']).replace(', -', ' - ')
    return results
```
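The final `replace(', -', ' - ')` is a small cleanup step: some catalogs render subject subdivisions as `, -Subdivision`, and the joined string reads better with a spaced dash. For example (the sample subject strings are made up):

```python
subjects = ['Science, -Periodicals', 'Biology']
# Join with commas, then rewrite a comma-plus-dash subdivision as a spaced dash
joined = ','.join(subjects).replace(', -', ' - ')
print(joined)  # -> Science - Periodicals,Biology
```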

```python
# main loop to parse all journals

time_delay = 0.5  # time delay in seconds to prevent Denial of Service (DoS)
try:
    # set up arguments for the command line
    args = sys.argv[1:]
    parser = argparse.ArgumentParser(description='Scrape out metadata from online catalogs for an ISSN')
    parser.add_argument('catalog', type=str, choices=('worldcat', 'carli_i-share', 'mobius'), help='Catalog name')
    parser.add_argument('-b', '--batch', nargs=1, metavar=('Input CSV'), help='Run in batch mode - processing multiple ISSNs')
    parser.add_argument('-s', '--single', nargs=1, metavar=('ISSN'), help='Run for a single ISSN')
    args = parser.parse_args()
    params = get_catalog_params(args.catalog)  # catalog parameters

    # single ISSN
    if args.single is not None:
        issn = args.single[0]
        r = get_issn_data(issn, params)
        print('ISSN: {0}\r\nURL: {1}\r\nTitle: {2}\r\nSubjects: {3}'.format(r['issn'], r['url'], r['title'], r['subjects']))

    # multiple ISSNs
    elif args.batch is not None:
        input_filename = args.batch[0]
        output_filename = 'batch_output_{0}.csv'.format(args.catalog)  # put the catalog name at the end of the output file
        with open(input_filename, mode='r') as csv_input, open(output_filename, mode='w', newline='', encoding='utf-8') as csv_output:
            read_in = csv.reader(csv_input, delimiter=',')
            write_out = csv.writer(csv_output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            write_out.writerow(['ISSN', 'URL', 'Title', 'Subjects'])  # write out headers to the output file
            total_rows = sum(1 for row in read_in)  # read all rows to get the total
            csv_input.seek(0)  # move back to the beginning of the file
            read_in = csv.reader(csv_input, delimiter=',')  # reload the csv reader object
            for row in tqdm(read_in, total=total_rows):  # tqdm is a progress bar
                # each row is an ISSN
                issn = row[0]
                r = get_issn_data(issn, params)
                write_out.writerow([r['issn'], r['url'], r['title'], r['subjects']])
except Exception as e:
    # the matching exception handler was cut off in the original post;
    # this minimal fallback just completes the try block
    print('Error - %s' % e)
```
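The command-line interface above accepts the catalog name plus either `-s` for one ISSN or `-b` for a CSV of ISSNs. A small sketch of how argparse parses such an invocation (here `parse_args` is fed an explicit list, standing in for `sys.argv[1:]`; the script filename is whatever you saved the code as):

```python
import argparse

parser = argparse.ArgumentParser(description='Scrape out metadata from online catalogs for an ISSN')
parser.add_argument('catalog', type=str, choices=('worldcat', 'carli_i-share', 'mobius'))
parser.add_argument('-b', '--batch', nargs=1, metavar='Input CSV')
parser.add_argument('-s', '--single', nargs=1, metavar='ISSN')

# Equivalent to running: python <script>.py mobius -s 0028-0836
args = parser.parse_args(['mobius', '-s', '0028-0836'])
print(args.catalog, args.single[0])  # -> mobius 0028-0836
```

Because of `nargs=1`, `args.single` is a one-element list, which is why the script indexes it with `[0]`.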
