Are university libraries abroad like the ones in China? Scrape journal subject headings with a Python script!

Summary: Are university libraries abroad like the ones in China? This post walks through a Python script that looks up journals by ISSN in three online catalogs (WorldCat, CARLI I-Share, and MOBIUS) and scrapes the title and subject headings from each catalog record.
import argparse
import csv
import re
import sys
import time
from urllib.request import urlopen

from bs4 import BeautifulSoup
from tqdm import tqdm

# Parameters for each supported catalog system:
#   'base_url'     : beginning part of the URL, from 'http://' up to (but not including) the first '/'
#   'search_url'   : search path for the online catalog without the base URL, starting with '/';
#                    make sure '{0}' is in the proper place for the ISSN query
#   'search_title' : CSS selector for the parent element of the anchor containing
#                    the journal title on the search results page
#   'bib_record'   : CSS selector for the record metadata on the catalog item's HTML page
#   'bib_title'    : CSS selector for the parent element of the anchor containing the journal title
#   'bib_subjects' : selector for the table element whose text begins with
#                    "Topics" or "Subject" on the item page, searched within bib_record
catalogs = {
    'worldcat' : {
        'base_url' : "https://www.worldcat.org",
        'search_url' : "/search?qt=worldcat_org_all&q={0}",
        'search_title' : ".result.details .name",
        'bib_record' : "div#bibdata",
        'bib_title' : "div#bibdata h1.title",
        'bib_subjects' : "th"
    },
    'carli_i-share' : {
        'base_url' : "https://vufind.carli.illinois.edu",
        'search_url' : "/all/vf-sie/Search/Home?lookfor={0}&type=isn&start_over=0&submit=Find&search=new",
        'search_title' : ".result .resultitem",
        'bib_record' : ".record table.citation",
        'bib_title' : ".record h1",
        'bib_subjects' : "th"
    },
    'mobius' : {
        'base_url' : "https://searchmobius.org",
        'search_url' : "/iii/encore/search/C__S{0}%20__Orightresult__U?lang=eng&suite=cobalt",
        'search_title' : ".dpBibTitle .title",
        'bib_record' : "table#bibInfoDetails",
        'bib_title' : "div#bibTitle",
        'bib_subjects' : "td"
    }
}
# Obtain the parameters for a specific catalog system
# Input: catalog name: 'worldcat', 'carli_i-share', 'mobius'
# Output: dictionary of catalog parameters
def get_catalog_params(catalog_key):
    try:
        return catalogs[catalog_key]
    except KeyError:
        print('Error - unknown catalog %s' % catalog_key)
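
As a quick illustration (not part of the original script), the returned dictionary can be combined with the search URL template; the ISSN below is only an example:

# minimal sketch: look up a catalog's parameters and build its ISSN search URL
# ('0028-0836' is an example ISSN, not from the original article)
p = get_catalog_params('worldcat')
print(p['base_url'] + p['search_url'].format('0028-0836'))
# -> https://www.worldcat.org/search?qt=worldcat_org_all&q=0028-0836
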
# Search a catalog for an item by ISSN
# Input: ISSN, catalog parameters
# Output: full URL for the catalog item, or None if not found
def search_catalog(issn, p = catalogs['carli_i-share']):
    title_url = None
    # catalog URL for searching by ISSN
    url = p['base_url'] + p['search_url'].format(issn)
    u = urlopen(url)
    try:
        html = u.read().decode('utf-8')
    finally:
        u.close()
    try:
        soup = BeautifulSoup(html, features="html.parser")
        title = soup.select(p['search_title'])[0]
        title_url = title.find("a")['href']
    except:
        print('Error - unable to search catalog by ISSN')
        return title_url
    return p['base_url'] + title_url
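
A minimal usage sketch (it issues a live request against the catalog, and the ISSN is again only an illustration):

# returns the full URL of the first matching record, or None if nothing is found
record_url = search_catalog('0028-0836', get_catalog_params('worldcat'))
print(record_url)
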
# Scrape a catalog item URL for metadata
# Input: full URL, catalog parameters
# Output: dictionary of catalog item metadata, including title and subjects
def scrape_catalog_item(url, p = catalogs['carli_i-share']):
    result = {'title':None, 'subjects':None}
    u = urlopen(url)
    try:
        html = u.read().decode('utf-8')
    finally:
        u.close()
    try:
        soup = BeautifulSoup(html, features="html.parser")

        # title
        try:
            title = soup.select_one(p['bib_title']).contents[0].strip()
            # save title to result dictionary
            result["title"] = title
        except:
            print('Error - unable to scrape title from url')

        # subjects
        try:
            record = soup.select_one(p['bib_record'])
            subject = record.find_all(p['bib_subjects'], string=re.compile("(Subjects*|Topics*)"))[0]
            subject_header_row = subject.parent
            subject_anchors = subject_header_row.find_all("a")
            subjects = []
            for anchor in subject_anchors:
                subjects.append(anchor.string.strip())
            # save subjects to result dictionary
            result["subjects"] = subjects
        except:
            print('Error - unable to scrape subjects from url')
    except:
        print('Error - unable to scrape url')
    return result
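
Chaining the two functions shows the shape of the returned metadata; the dictionary contents in the comment are hypothetical and depend on the live catalog record:

# minimal sketch: search by ISSN, then scrape the record page it points to
p = get_catalog_params('worldcat')
record_url = search_catalog('0028-0836', p)
if record_url:
    print(scrape_catalog_item(record_url, p))
    # e.g. {'title': '...', 'subjects': ['...', '...']}
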


# Search for a catalog item and process metadata from the item's HTML page
# Input: ISSN, catalog parameters
# Output: dictionary of values: issn, catalog url, title, subjects
def get_issn_data(issn, p = catalogs['carli_i-share']):
    results = {'issn':issn, 'url':None, 'title':None, 'subjects':None}
    time.sleep(time_delay)
    url = search_catalog(issn, p)
    results['url'] = url
    if url: # only parse metadata for a valid URL
        time.sleep(time_delay)
        item_data = scrape_catalog_item(url, p)
        results['title'] = item_data['title']
        if item_data['subjects'] is not None:
            results['subjects'] = ','.join(item_data['subjects']).replace(', -', ' - ')
    return results
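
get_issn_data wraps the search-then-scrape sequence, rate-limits the two requests with time_delay, and flattens the subject list into a single comma-separated string. A minimal standalone call might look like this (example ISSN only):

time_delay = 0.5  # needed by get_issn_data; also set in the main section below
r = get_issn_data('0028-0836', get_catalog_params('worldcat'))
print(r)
# e.g. {'issn': '0028-0836', 'url': '...', 'title': '...', 'subjects': '...'}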

# main loop to parse all journals

time_delay = 0.5 # time delay in seconds to prevent Denial of Service (DoS)
try:
    # set up arguments for the command line
    args = sys.argv[1:]
    parser = argparse.ArgumentParser(description='Scrape out metadata from online catalogs for an ISSN')
    parser.add_argument('catalog', type=str, choices=('worldcat', 'carli_i-share', 'mobius'), help='Catalog name')
    parser.add_argument('-b', '--batch', nargs=1, metavar=('Input CSV'), help='Run in batch mode - processing multiple ISSNs')
    parser.add_argument('-s', '--single', nargs=1, metavar=('ISSN'), help='Run for single ISSN')
    args = parser.parse_args()
    params = get_catalog_params(args.catalog) # catalog parameters

    # single ISSN
    if args.single is not None:
        issn = args.single[0]
        r = get_issn_data(issn, params)
        print('ISSN: {0}\r\nURL: {1}\r\nTitle: {2}\r\nSubjects: {3}'.format(r['issn'], r['url'], r['title'], r['subjects']))

    # multiple ISSNs
    elif args.batch is not None:
        input_filename = args.batch[0]
        output_filename = 'batch_output_{0}.csv'.format(args.catalog) # put name of catalog at end of output file
        with open(input_filename, mode='r') as csv_input, open(output_filename, mode='w', newline='', encoding='utf-8') as csv_output:
            read_in = csv.reader(csv_input, delimiter=',')
            write_out = csv.writer(csv_output, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
            write_out.writerow(['ISSN', 'URL', 'Title', 'Subjects']) # write out headers to output file
            total_rows = sum(1 for row in read_in) # read all rows to get total
            csv_input.seek(0) # move back to beginning of file
            read_in = csv.reader(csv_input, delimiter=',') # reload csv reader object
            for row in tqdm(read_in, total=total_rows): # tqdm is progress bar
                # each row is an ISSN
                issn = row[0]
                r = get_issn_data(issn, params)
                write_out.writerow([r['issn'], r['url'], r['title'], r['subjects']])
except Exception as e: # generic handler assumed to close the try block above
    print('Error - %s' % e)
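
To run the script from the command line, pass the catalog name plus either -s for a single ISSN or -b for a CSV file whose first column holds ISSNs, for example (the file names here are hypothetical): python scrape_issn.py worldcat -s 0028-0836, or python scrape_issn.py mobius -b issn_list.csv. A batch run writes its results to batch_output_<catalog>.csv in the current directory.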
