
Python Web Crawler (8)
We want the crawler to extract some data from each web page and then do something with it; this practice is also known as scraping.
1 Data Extraction Methods
Regular expressions
The BeautifulSoup module (popular)
Lxml (powerful)
1.1 Regular Expressions
Below is an example of using a regular expression to extract the country area data.
(The documentation for Python's re module covers the full regular expression syntax.)
import re
import urllib2

def scrape(html):
    area = re.findall('<tr id="places_area__row">.*?<td\s*class=["\']w2p_fw["\']>(.*?)</td>', html)[0]
    return area

if __name__ == '__main__':
    html = urllib2.urlopen('http://127.0.0.1:8000/places/default/view/China-47').read()
    print scrape(html)
Regular expressions give us a quick way to grab the data, but they are hard to construct, hard to read, and brittle: even a small change in the page layout can break them.
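To make that brittleness concrete, here is a small sketch (the HTML snippet and the added align attribute are invented for illustration) showing how an otherwise working pattern silently stops matching after a minor markup change:

import re

html = '<tr id="places_area__row"><td class="w2p_fw">244,820 square kilometres</td></tr>'
pattern = '<tr id="places_area__row">.*?<td class="w2p_fw">(.*?)</td>'
print re.findall(pattern, html)    # ['244,820 square kilometres']

# if the site later adds an attribute to the cell, the same pattern matches nothing
html = html.replace('class="w2p_fw"', 'class="w2p_fw" align="left"')
print re.findall(pattern, html)    # []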
1.2 The Popular BeautifulSoup Module
Installation: pip install beautifulsoup4
Some web pages do not have well-formed HTML. For example, the following HTML has missing quotes around an attribute value and unclosed tags:
<ul class=country>
    <li>Area
    <li>Population
</ul>
Extracting data from markup like this often fails to produce the expected result, but Beautiful Soup can handle it.
>>> from bs4 import BeautifulSoup
>>> broken_html = '<ul class=country><li>Area<li>Population</ul>'
>>> soup = BeautifulSoup(broken_html, 'html.parser')
>>> fixed_html = soup.prettify()
>>> print fixed_html
<ul class="country">
 <li>
  Area
  <li>
   Population
  </li>
 </li>
</ul>
>>> ul = soup.find('ul', attrs={'class':'country'})
>>> ul.find('li')  # returns just the first match
<li>Area<li>Population</li></li>
>>> ul.find_all('li')  # returns all matches
[<li>Area<li>Population</li></li>, <li>Population</li>]
The official Beautiful Soup documentation describes the full API.
Below is an example of using BeautifulSoup to extract the country area data.
import urllib2
from bs4 import BeautifulSoup

def scrape(html):
    soup = BeautifulSoup(html, 'html.parser')
    tr = soup.find(attrs={'id':'places_area__row'})
    td = tr.find(attrs={'class':'w2p_fw'})
    area = td.text
    return area

if __name__ == '__main__':
    html = urllib2.urlopen('http://127.0.0.1:8000/places/default/view/United-Kingdom-239').read()
    print scrape(html)
Although the BeautifulSoup version is more verbose than the regular expression, it is easier to construct and understand, and we no longer need to worry about small layout changes such as extra whitespace or additional tag attributes.
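To see that robustness in practice, here is a minimal sketch (the extra whitespace and the style attribute are invented for illustration) showing that the same find() calls keep working when the markup drifts slightly:

from bs4 import BeautifulSoup

# the same table row, but with extra whitespace and a new attribute added
html = '<tr id="places_area__row" style="color:red"> <td  class="w2p_fw"> 244,820 square kilometres </td></tr>'
soup = BeautifulSoup(html, 'html.parser')
tr = soup.find(attrs={'id': 'places_area__row'})
td = tr.find(attrs={'class': 'w2p_fw'})
print td.text.strip()    # 244,820 square kilometres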
1.3 The Powerful Lxml Module
Lxml is a Python wrapper around the libxml2 XML parsing library. Because the underlying module is written in C, it parses faster than Beautiful Soup, although installation is also more involved; the latest installation instructions can be found in the lxml documentation.
As with Beautiful Soup, the first step when using lxml is to parse the possibly invalid HTML into a consistent format.
>>> import lxml.html
>>> broken_html = '<ul class=country><li>Area<li>Population</ul>'
>>> tree = lxml.html.fromstring(broken_html)
>>> fixed_html = lxml.html.tostring(tree, pretty_print=True)
>>> print fixed_html
<ul class="country">
<li>Area</li>
<li>Population</li>
</ul>
lxml likewise handles the missing attribute quotes correctly and closes the open tags. Once the input has been parsed, the next step is selecting elements, and here lxml offers several different approaches:
- XPath selectors (similar to Beautiful Soup's find() method)
- CSS selectors (similar to jQuery selectors)
We will use CSS selectors here, because they are more concise and can also be reused later when parsing dynamic content.
>>> li = tree.cssselect('ul.country > li')[0]
>>> area = li.text_content()
>>> print area
Area
Some common CSS selector patterns:
Select all tags: *
Select <a> tags: a
Select all elements with class="link": .link
Select <a> tags with class="link": a.link
Select the <a> tag with id="home": a#home
Select all <span> tags whose parent is an <a> tag: a > span
Select all <span> tags inside an <a> tag (at any depth): a span
Select all <a> tags whose title attribute is "Home": a[title=Home]
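As a quick self-contained demonstration of a few of these selectors (the small HTML fragment below is invented for illustration), this is how they behave with lxml's cssselect():

import lxml.html

# a small invented fragment to exercise the selectors above
html = '''
<div>
  <a id="home" class="link" title="Home" href="/">home <span>icon</span></a>
  <a class="link" href="/about">about</a>
  <span>outside any link</span>
</div>
'''
tree = lxml.html.fromstring(html)
print [a.get('href') for a in tree.cssselect('a.link')]   # ['/', '/about']
print tree.cssselect('a#home')[0].get('title')            # Home
print tree.cssselect('a > span')[0].text_content()        # icon
print len(tree.cssselect('a[title=Home]'))                # 1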
Below is an example of using CSS selectors to extract the country area data.
import urllib2
import lxml.html

def scrape(html):
    tree = lxml.html.fromstring(html)
    td = tree.cssselect('tr#places_area__row > td.w2p_fw')[0]
    area = td.text_content()
    return area

if __name__ == '__main__':
    html = urllib2.urlopen('http://127.0.0.1:8000/places/default/view/China-47').read()
    print scrape(html)
The W3C publishes the CSS3 Selectors specification on its website.
Lxml already implements most of the CSS3 selectors; the features it does not support are listed in its documentation.
Note that, in its internal implementation, lxml actually converts CSS selectors into equivalent XPath selectors.
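You can observe this conversion by asking lxml for the compiled selector's XPath expression. A minimal sketch (the exact output string may differ between lxml/cssselect versions, and recent lxml versions require the separate cssselect package to be installed):

from lxml.cssselect import CSSSelector

sel = CSSSelector('tr#places_area__row > td.w2p_fw')
print sel.path    # the equivalent XPath expression generated from the CSS selector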
2 Performance Comparison
import csv
import re
import time
import urllib2
from bs4 import BeautifulSoup
import lxml.html

FIELDS = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')

def regex_scraper(html):
    results = {}
    for field in FIELDS:
        results[field] = re.search('<tr id="places_{}__row">.*?<td class="w2p_fw">(.*?)</td>'.format(field), html).groups()[0]
    return results

def beautiful_soup_scraper(html):
    soup = BeautifulSoup(html, 'html.parser')
    results = {}
    for field in FIELDS:
        results[field] = soup.find('table').find('tr', id='places_{}__row'.format(field)).find('td', class_='w2p_fw').text
    return results

def lxml_scraper(html):
    tree = lxml.html.fromstring(html)
    results = {}
    for field in FIELDS:
        results[field] = tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content()
    return results

def main():
    times = {}
    html = urllib2.urlopen('http://127.0.0.1:8000/places/default/view/China-47').read()
    NUM_ITERATIONS = 1000  # number of times to test each scraper
    for name, scraper in ('Regular expressions', regex_scraper), ('Beautiful Soup', beautiful_soup_scraper), ('Lxml', lxml_scraper):
        times[name] = []
        # record the start time of this scraper's run
        start = time.time()
        for i in range(NUM_ITERATIONS):
            if scraper == regex_scraper:
                # the re module caches compiled patterns,
                # so purge the cache for a fair comparison
                re.purge()
            result = scraper(html)
            # check the scraped result is as expected
            assert(result['area'] == '9596960 square kilometres')
            times[name].append(time.time() - start)
        # record the end time and report the total
        end = time.time()
        print '{}: {:.2f} seconds'.format(name, end - start)

    # save the per-iteration timings to a CSV file
    writer = csv.writer(open('times.csv', 'w'))
    header = sorted(times.keys())
    writer.writerow(header)
    for row in zip(*[times[scraper] for scraper in header]):
        writer.writerow(row)

if __name__ == '__main__':
    main()
This script runs each scraper 1000 times, checks each result for correctness, prints the total time taken, and stores all of the per-iteration timings in a CSV file. Because the re module caches its search patterns, we call re.purge() on each iteration to clear that cache and keep the comparison fair.
wu_being@ubuntukylin64:~/GitHub/WebScrapingWithPython/2.数据抓取$ python 2performance.py
Regular expressions: 6.65 seconds
Beautiful Soup: 61.61 seconds
Lxml: 8.57 seconds
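As an aside on why re.purge() is needed for a fair benchmark: the re module keeps an internal cache of compiled patterns, so repeated searches with the same pattern skip recompilation. The sketch below peeks at CPython's undocumented re._cache purely as an illustration of that caching:

import re

re.purge()                                              # start from an empty cache
re.search('places_(\w+)__row', '<tr id="places_area__row">')
print len(re._cache)                                    # 1: the compiled pattern was cached
re.purge()                                              # clear the cache, as the benchmark does
print len(re._cache)                                    # 0: the next search will recompile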
To summarize the trade-offs between the three approaches:
Regular expressions: fast, but hard to write and to read; easy to install (built-in module).
Beautiful Soup: by far the slowest, but easy to use; easy to install (pure Python).
Lxml: fast and easy to use, though installation is comparatively more involved.
3 Adding a Scrape Callback to the Link Crawler
To integrate the extraction code with the link crawler from the previous chapter, we add a scrape_callback parameter: a function passed in to handle the data-extraction behaviour. In this design, the callback is invoked after each page has been downloaded; it takes two arguments, url and html, and may return a list of further URLs to crawl.
def link_crawler(seed_url, link_regex=None, ..., scrape_callback=None):
    ...
    html = download(url, headers, proxy=proxy, num_retries=num_retries)
    links = []
    if scrape_callback:
        links.extend(scrape_callback(url, html) or [])
    ...
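A callback can also return extra URLs for the crawler to visit, since whatever it returns is appended to the crawl queue. Here is a minimal sketch of such a callback (the a.next-page selector is hypothetical, just to show the shape):

import lxml.html

def pagination_callback(url, html):
    """Print the page title and ask the crawler to follow pagination links."""
    tree = lxml.html.fromstring(html)
    print url, tree.findtext('.//title')
    # hypothetical 'next page' links; whatever we return is crawled next
    return [a.get('href') for a in tree.cssselect('a.next-page')]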
3.1 Callback Version One
Now we only need to customize the scrape_callback function that we pass in.
import re
import csv
import urlparse
import lxml.html
from link_crawler import link_crawler

FIELDS = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')

def scrape_callback(url, html):
    if re.search('/view/', url):
        tree = lxml.html.fromstring(html)
        row = [tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content() for field in FIELDS]
        print url, row

if __name__ == '__main__':
    link_crawler('http://127.0.0.1:8000/places', '/places/default/(index|view)', scrape_callback=scrape_callback)
Output from the first callback:
wu_being@ubuntukylin64:~/GitHub/WebScrapingWithPython/2.数据抓取$ python 3scrape_callback1.py
Downloading: /
Downloading: /index/1
Downloading: /index/25
Downloading: /view/Zimbabwe-252
/view/Zimbabwe-252 ['390,580 square kilometres', '11,651,858', 'ZW', 'Zimbabwe', 'Harare', 'AF', '.zw', 'ZWL', 'Dollar', '263', '', '', 'en-ZW,sn,nr,nd', 'ZA MZ BW ZM ']
Downloading: /view/Zambia-251
/view/Zambia-251 ['752,614 square kilometres', '13,460,305', 'ZM', 'Zambia', 'Lusaka', 'AF', '.zm', 'ZMW', 'Kwacha', '260', '#####', '^(\\d{5})$', 'en-ZM,bem,loz,lun,lue,ny,toi', 'ZW TZ MZ CD NA MW AO ']
Downloading: /view/Yemen-250
3.2 Callback Version Two
Next we extend this functionality to save the resulting data to a CSV file. To do so we use a callback class instead of a function, so that the state of the csv writer can be maintained. The csv writer is instantiated in the constructor and then written to repeatedly in the __call__ method. Note that __call__ is a special method invoked when the object is called like a function, which is exactly how the link crawler invokes scrape_callback; in other words, scrape_callback(url, html) is equivalent to scrape_callback.__call__(url, html). See the Python documentation on special methods for more details.
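As a quick illustration of that equivalence, independent of the crawler (the Greeter class is invented for this example):

class Greeter:
    def __call__(self, name):
        return 'Hello, ' + name

greet = Greeter()
print greet('World')           # Hello, World
print greet.__call__('World')  # the same call, written out explicitly

With that in mind, the full callback class looks like this: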
import re
import csv
import urlparse
import lxml.html
from link_crawler import link_crawler

class ScrapeCallback:
    def __init__(self):
        self.writer = csv.writer(open('countries.csv', 'w'))
        self.fields = ('area', 'population', 'iso', 'country', 'capital', 'continent', 'tld', 'currency_code', 'currency_name', 'phone', 'postal_code_format', 'postal_code_regex', 'languages', 'neighbours')
        self.writer.writerow(self.fields)

    def __call__(self, url, html):
        if re.search('/view/', url):
            tree = lxml.html.fromstring(html)
            row = []
            for field in self.fields:
                row.append(tree.cssselect('table > tr#places_{}__row > td.w2p_fw'.format(field))[0].text_content())
            self.writer.writerow(row)

if __name__ == '__main__':
    link_crawler('http://127.0.0.1:8000/places', '/places/default/(index|view)', scrape_callback=ScrapeCallback())
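Once a crawl has finished, the collected rows can be checked directly from the generated countries.csv, for example (a minimal sketch):

import csv

with open('countries.csv') as f:
    for row in csv.reader(f):
        print row[:4]    # area, population, iso, country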
3.3 Reusing the Link Crawler Code from the Previous Chapter
import re
import urlparse
import urllib2
import time
from datetime import datetime
import robotparser
import Queue


def link_crawler(seed_url, link_regex=None, delay=0, max_depth=-1, max_urls=-1, headers=None, user_agent='wswp', proxy=None, num_retries=1, scrape_callback=None):
    """Crawl from the given seed URL following links matched by link_regex"""
    # the queue of URLs that still need to be crawled
    crawl_queue = [seed_url]
    # the URLs that have been seen, and at what depth
    seen = {seed_url: 0}
    # track how many URLs have been downloaded
    num_urls = 0
    rp = get_robots(seed_url)
    throttle = Throttle(delay)
    headers = headers or {}
    if user_agent:
        headers['User-agent'] = user_agent

    while crawl_queue:
        url = crawl_queue.pop()
        depth = seen[url]
        # check that the URL passes the robots.txt restrictions
        if rp.can_fetch(user_agent, url):
            throttle.wait(url)
            html = download(url, headers, proxy=proxy, num_retries=num_retries)
            links = []
            if scrape_callback:
                links.extend(scrape_callback(url, html) or [])

            if depth != max_depth:
                # can still crawl further
                if link_regex:
                    # filter for links matching our regular expression
                    links.extend(link for link in get_links(html) if re.match(link_regex, link))

                for link in links:
                    link = normalize(seed_url, link)
                    # check whether this link has already been crawled
                    if link not in seen:
                        seen[link] = depth + 1
                        # check the link is within the same domain
                        if same_domain(seed_url, link):
                            # success! add this new link to the queue
                            crawl_queue.append(link)

            # check whether we have reached the download maximum
            num_urls += 1
            if num_urls == max_urls:
                break
        else:
            print 'Blocked by robots.txt:', url


class Throttle:
    """Throttle downloading by sleeping between requests to the same domain"""
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        """Delay if this domain was accessed recently"""
        domain = urlparse.urlsplit(url).netloc
        last_accessed = self.domains.get(domain)

        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()


def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                html = download(url, headers, proxy, num_retries-1, data)
        else:
            code = None
    return html


def normalize(seed_url, link):
    """Normalize this URL by removing the hash fragment and adding the domain"""
    link, _ = urlparse.urldefrag(link)
    return urlparse.urljoin(seed_url, link)


def same_domain(url1, url2):
    """Return True if both URLs belong to the same domain"""
    return urlparse.urlparse(url1).netloc == urlparse.urlparse(url2).netloc


def get_robots(url):
    """Initialize a robots.txt parser for this domain"""
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp


def get_links(html):
    """Return a list of links from the html"""
    # a regular expression to extract all links from the webpage
    webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
    return webpage_regex.findall(html)


if __name__ == '__main__':
    link_crawler('', '/(index|view)', delay=0, num_retries=1, user_agent='BadCrawler')
    link_crawler('', '/(index|view)', delay=0, num_retries=1, max_depth=1, user_agent='GoodCrawler')
Wu_Being blog notice: reposting is welcome, but please credit the original article and link back to it. Thank you!
[Python Crawler Series] "[Python Crawler 2] Extracting Web Page Data"
The GitHub code files for the Python crawler series:
If you found this post helpful and are willing to sponsor it, I will be all the more motivated to keep writing.