What to do when Python raises these errors, and how to fix them
Admin · 2022-08-27 · 群英技术资讯
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
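This error appears when code that starts worker processes runs at module import time. On platforms that start children with spawn rather than fork (Windows, and macOS by default since Python 3.8), each child process re-imports the main module, so unguarded process creation would recurse, and Python stops it with the RuntimeError above. A minimal sketch of the problematic pattern (hypothetical worker function, assuming a spawn-based platform):

import multiprocessing as mp

def work(x):        # hypothetical worker
    return x * x

# Problematic: the pool is created at import time, so every spawned child
# re-imports this module and immediately tries to create its own pool.
pool = mp.Pool(2)
print(pool.map(work, range(4)))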
The script that triggered the error. Note that the pool and the crawl loop sit directly at module level:

import multiprocessing as mp
import time
from urllib.request import urlopen
from urllib.parse import urljoin
from bs4 import BeautifulSoup
import re

base_url = "https://morvanzhou.github.io/"

# crawl: fetch a page and return its HTML
def crawl(url):
    response = urlopen(url)
    time.sleep(0.1)
    return response.read().decode()

# parse: extract the title, the internal links and the canonical url
def parse(html):
    soup = BeautifulSoup(html, 'html.parser')
    urls = soup.find_all('a', {"href": re.compile('^/.+?/$')})
    title = soup.find('h1').get_text().strip()
    page_urls = set([urljoin(base_url, url['href']) for url in urls])
    url = soup.find('meta', {'property': "og:url"})['content']
    return title, page_urls, url

# Problem: everything below runs at module level, so spawned children
# re-import this module and try to create their own pool.
unseen = set([base_url])
seen = set()
restricted_crawl = True

pool = mp.Pool(4)
count, t1 = 1, time.time()
while len(unseen) != 0:                  # still some urls to visit
    if restricted_crawl and len(seen) > 20:
        break
    print('\nDistributed Crawling...')
    crawl_jobs = [pool.apply_async(crawl, args=(url,)) for url in unseen]
    htmls = [j.get() for j in crawl_jobs]        # request connection

    print('\nDistributed Parsing...')
    parse_jobs = [pool.apply_async(parse, args=(html,)) for html in htmls]
    results = [j.get() for j in parse_jobs]      # parse html

    print('\nAnalysing...')
    seen.update(unseen)      # mark the crawled urls as seen
    unseen.clear()           # nothing left unseen

    for title, page_urls, url in results:
        print(count, title, url)
        count += 1
        unseen.update(page_urls - seen)          # new urls to crawl
print('Total time: %.1f s' % (time.time() - t1))   # 16 s !!!
The fix: move the driver code into a main() function and only call it under the __name__ guard:

import multiprocessing as mp
import time
from urllib.request import urlopen
from urllib.parse import urljoin
from bs4 import BeautifulSoup
import re

base_url = "https://morvanzhou.github.io/"

# crawl: fetch a page and return its HTML
def crawl(url):
    response = urlopen(url)
    time.sleep(0.1)
    return response.read().decode()

# parse: extract the title, the internal links and the canonical url
def parse(html):
    soup = BeautifulSoup(html, 'html.parser')
    urls = soup.find_all('a', {"href": re.compile('^/.+?/$')})
    title = soup.find('h1').get_text().strip()
    page_urls = set([urljoin(base_url, url['href']) for url in urls])
    url = soup.find('meta', {'property': "og:url"})['content']
    return title, page_urls, url

def main():
    unseen = set([base_url])
    seen = set()
    restricted_crawl = True

    pool = mp.Pool(4)
    count, t1 = 1, time.time()
    while len(unseen) != 0:                  # still some urls to visit
        if restricted_crawl and len(seen) > 20:
            break
        print('\nDistributed Crawling...')
        crawl_jobs = [pool.apply_async(crawl, args=(url,)) for url in unseen]
        htmls = [j.get() for j in crawl_jobs]        # request connection

        print('\nDistributed Parsing...')
        parse_jobs = [pool.apply_async(parse, args=(html,)) for html in htmls]
        results = [j.get() for j in parse_jobs]      # parse html

        print('\nAnalysing...')
        seen.update(unseen)      # mark the crawled urls as seen
        unseen.clear()           # nothing left unseen

        for title, page_urls, url in results:
            print(count, title, url)
            count += 1
            unseen.update(page_urls - seen)          # new urls to crawl
    print('Total time: %.1f s' % (time.time() - t1))   # 16 s !!!

if __name__ == '__main__':
    main()
In summary: gather your top-level driver code into a function such as main(), then add

if __name__ == '__main__':
    main()

and the error goes away. The guard works because spawn-based platforms re-import the main module in every child process; with the guard in place, only the parent process actually runs the code that creates new processes.
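The error message also mentions freeze_support(). That call only matters if the script will later be frozen into a standalone executable (for example with PyInstaller); in that case it should be the first statement under the guard. A minimal sketch of the complete idiom (hypothetical worker function):

import multiprocessing as mp

def work(x):        # hypothetical worker
    return x * x

def main():
    # the pool is created only in the parent process, after the guard
    with mp.Pool(2) as pool:
        print(pool.map(work, range(4)))

if __name__ == '__main__':
    mp.freeze_support()   # needed only when freezing the program into an executable
    main()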
Python errors of the type "RuntimeError: fails to pass a sanity check due to a bug in the windows runtime"
The cause:
1. There is a compatibility problem between the Python and numpy versions in use; for example, my Python 3.9 together with numpy 1.19.4 produces this error.
2. numpy 1.19.4 has this problem with many current Python versions.
The error goes away once you downgrade numpy under File -> Settings -> Project: pycharmProjects -> Project Interpreter:
1. Open the Project Interpreter page.
2. Double-click numpy to change its version.
3. Tick the version checkbox so the version can be changed, then install the lower version you need.
Once that is done, rerun the program and the error should be gone; a command-line alternative is sketched below.
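If you are not using PyCharm, the same downgrade can be done with pip. The version most often reported to avoid this Windows runtime bug is 1.19.3, but treat the exact version as an assumption for your own environment:

# from a terminal (or the PyCharm terminal), run:
#   pip install numpy==1.19.3
import numpy
print(numpy.__version__)   # should now report the downgraded version, e.g. 1.19.3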