[python] crawler sample

A Python 2 crawler sample: starting from a list of seed URIs, it follows links breadth-first up to a limit and writes every distinct URI it finds to uris.csv.
```python
# encoding: utf-8
import codecs
from collections import deque
import urllib2, urlparse
import socket
from BeautifulSoup import BeautifulSoup


def get_links(uri, startswith=""):
    """Returns list of referenced URIs (without duplicates)
    found in the document returned for the input URI"""
    results = set()
    try:
        page = urllib2.urlopen(uri)
        soup = BeautifulSoup(page)
        for link in soup.findAll('a'):  # <a href="...">
            try:
                link = link['href']
                if not link.startswith("javascript:") \
                        and not link.startswith("mailto:") \
                        and not link.startswith("skype:"):
                    link = urlparse.urljoin(uri, link)  # expand relative URIs
                    if link.startswith(startswith):
                        results.add(link)
            except KeyError:
                print "Missing href attribute in %s" % link
    except:
        pass  # unreachable or unparsable page: skip it
    return list(results)


def crawl(seed_uris, timeout=5, limit=1000, debug=True, startswith=""):
    """Returns a list of URIs found by following all links
    from the list of seed URIs given"""
    queue = deque(seed_uris)
    results = seed_uris[:]
    socket.setdefaulttimeout(timeout)  # give up on slow servers
    while len(queue) > 0 and len(results) < limit:
        uri = queue.popleft()
        if debug:
            print "Analyzing %s" % uri
        links = get_links(uri, startswith)
        new_links = [uri for uri in links if uri not in results]
        if debug:
            print "%i links found, of which %i are new" % (len(links), len(new_links))
        results.extend(new_links)
        queue.extend(new_links)
        if debug:
            print "Status: %i URIs known, %i URIs in queue" % (len(results), len(queue))
    if debug:
        print "Completed."
        print "URI count after analysing all linked pages: %i distinct URLs" % len(results)
    results.sort()
    return results


def main():
    SEED_URIS = ["http://www.heppnetz.de/", "http://www.unibw.de"]
    results = crawl(seed_uris=SEED_URIS, limit=1000, startswith="")
    f = codecs.open('uris.csv', 'wt', 'utf-8')
    for line in results:
        f.write(line + "\n")
    f.close()


if __name__ == '__main__':
    main()
```
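The script above is Python 2 only: urllib2, urlparse, and the old BeautifulSoup 3 package do not exist under those names in Python 3. As a rough sketch of how the same breadth-first crawler might look on Python 3, assuming the beautifulsoup4 package (bs4) is installed, something like the following should work; the seed URI reuses the example from the original code, and the structure mirrors it rather than being a drop-in replacement:

```python
# Minimal Python 3 sketch of the same crawler, assuming
# beautifulsoup4 is installed (pip install beautifulsoup4).
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup


def get_links(uri, startswith=""):
    """Return the set of absolute URIs referenced by the document at uri."""
    results = set()
    try:
        with urlopen(uri, timeout=5) as page:
            soup = BeautifulSoup(page, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = anchor["href"]
            if not link.startswith(("javascript:", "mailto:", "skype:")):
                link = urljoin(uri, link)  # expand relative URIs
                if link.startswith(startswith):
                    results.add(link)
    except OSError:
        pass  # URLError and timeouts subclass OSError: skip the page
    return results


def crawl(seed_uris, limit=1000, startswith=""):
    """Breadth-first crawl from seed_uris, up to limit distinct URIs."""
    queue = deque(seed_uris)
    results = list(seed_uris)
    while queue and len(results) < limit:
        uri = queue.popleft()
        new_links = [u for u in get_links(uri, startswith) if u not in results]
        results.extend(new_links)
        queue.extend(new_links)
    return sorted(results)


if __name__ == "__main__":
    for uri in crawl(["http://www.heppnetz.de/"], limit=50):
        print(uri)
```

Note that, as in the original, the visited check is a linear scan of a list; for larger crawls a separate `set` of seen URIs would keep membership tests O(1) while the list preserves discovery order.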