Search resource list (URL tools)
NWebCrawler
- A web crawler written in C#. Users can configure the number of threads, thread wait time, connection timeout, crawlable file types and their priorities, the download directory, and other parameters. It collects URLs from the network and stores the downloaded data in a database.
web-monitor
- Web-page monitoring software that records the URL history of visited pages. Developed with VC++ 6.0.
HttpSniffer
- Uses an HTTP sniffer to capture URLs, e.g. for downloading movies.
sy1
- A small program written in C++: enter an IP address and it is converted to a URL (hostname); conversely, enter a URL and it is converted to an IP address. It can also download any sub-page under a specified URL.
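The hostname/IP conversion the entry describes is a pair of DNS lookups. A minimal Python sketch (the C++ original would use `gethostbyname`/`gethostbyaddr` from the sockets API; function names here are illustrative):

```python
import socket

def url_to_ip(hostname: str) -> str:
    """Resolve the host part of a URL to an IPv4 address (forward DNS)."""
    return socket.gethostbyname(hostname)

def ip_to_hostname(ip: str) -> str:
    """Reverse-resolve an IP address to a hostname (reverse DNS)."""
    return socket.gethostbyaddr(ip)[0]

print(url_to_ip("localhost"))  # usually 127.0.0.1
```

Note that reverse lookups only succeed when the IP has a PTR record, so `ip_to_hostname` can raise `socket.herror` for many public addresses.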
SnatchUrlContent
- A web crawler: given an input address it fetches the page's information; a parsing routine then takes the page address plus the names of the first and last elements to crawl, and returns the desired content.
getPagemfc
- A small tool written in VC++ 6.0/MFC that fetches the page source of a specified URL.
URLDecEnc
- A URL encoder/decoder. For example, http://www.pudn.com/a.b_jsp?id=2324 is encoded as http%3A//www.pudn.com/a.b_jsp%3Fid%3D2324.
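The same percent-encoding round trip can be reproduced with Python's standard library, which makes the entry's example easy to verify:

```python
from urllib.parse import quote, unquote

url = "http://www.pudn.com/a.b_jsp?id=2324"

# quote() percent-encodes reserved characters (':' -> %3A, '?' -> %3F,
# '=' -> %3D); '/' is left alone by default.
encoded = quote(url)
print(encoded)  # http%3A//www.pudn.com/a.b_jsp%3Fid%3D2324

# unquote() reverses the transformation exactly.
assert unquote(encoded) == url
```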
AnalyseUrl
- Automatically extracts all URL link addresses from a web page.
GetPage_NetWrk
- Networking with Java (Reading the URL Page Content using Java) : Source Code
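Reading a URL's page content is a single call to the standard HTTP client. The sketch below stays self-contained by starting a throwaway local HTTP server and fetching from it; against a real site only the `urlopen` line is needed:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Hello(BaseHTTPRequestHandler):
    """Toy server that answers every GET with the body 'hello'."""
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Read the URL page content -- the part the entry is about.
url = f"http://127.0.0.1:{server.server_port}/"
with urlopen(url) as resp:
    content = resp.read().decode()
print(content)  # hello
server.shutdown()
```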
a
- Enter a Plurk avatar URL to view that account's previous avatars.
SendMsg_http
- Submits data to a URL over a socket, implementing the POST submission of an HTML form.
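Submitting a form over a raw socket means writing the HTTP request by hand: a `POST` request line, a `Content-Type` of `application/x-www-form-urlencoded`, and the URL-encoded fields as the body. A sketch that builds such a request (host, path, and field names are made up for illustration):

```python
from urllib.parse import urlencode

def build_post_request(host: str, path: str, fields: dict) -> bytes:
    """Build the raw HTTP/1.1 bytes a socket would send to POST a form."""
    body = urlencode(fields)  # e.g. 'user=alice&id=42'
    headers = (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    return (headers + body).encode()

req = build_post_request("www.example.com", "/submit", {"user": "alice", "id": "42"})
print(req.decode().splitlines()[0])  # POST /submit HTTP/1.1
```

To actually send it: `socket.create_connection((host, 80)).sendall(req)`, then read the response from the same socket.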
a-simple-phishing
- Identifies simple (phishing) URLs by analysis and returns different codes to indicate different judgment results.
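The "different codes for different verdicts" pattern can be sketched with a few classic phishing heuristics. The specific checks and return codes below are assumptions for illustration, not the entry's actual rules:

```python
import re
from urllib.parse import urlparse

# Illustrative verdict codes, one per heuristic.
OK, IP_HOST, AT_SIGN, TOO_MANY_DOTS = 0, 1, 2, 3

def check_url(url: str) -> int:
    """Return a code describing why a URL looks suspicious (0 = clean)."""
    authority = url.split("//", 1)[-1].split("/", 1)[0]
    if "@" in authority:
        return AT_SIGN        # user@host trick hides the real destination
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return IP_HOST        # raw IP instead of a domain name
    if host.count(".") > 4:
        return TOO_MANY_DOTS  # long subdomain chains mimicking a real site
    return OK

print(check_url("https://www.example.com/"))   # 0
print(check_url("http://1.2.3.4/login"))       # 1
```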
ThreadCrawler
- A web crawler program written in Java: enter a start URL and the number of pages you want to crawl, and it begins crawling.
Crawler
- A simple crawler written in Java. It saves HTML pages fetched via sockets, removes garbled encodings, stores the current page's URL, and automatically crawls pages in sequence.
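The "automatically crawl pages in sequence" behaviour shared by these crawler entries is breadth-first traversal with a visited set. A sketch using an in-memory fake site so it runs without network access (a real crawler would fetch each URL instead of reading the dict):

```python
from collections import deque
from html.parser import HTMLParser

# Fake "web site": URL -> HTML body (an assumption for the sketch).
SITE = {
    "/index": '<a href="/a">a</a><a href="/b">b</a>',
    "/a": '<a href="/b">b</a>',
    "/b": '<a href="/index">home</a>',
}

class Links(HTMLParser):
    """Collect href values of <a> tags."""
    def __init__(self):
        super().__init__()
        self.found = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.found += [v for k, v in attrs if k == "href"]

def crawl(start: str) -> list:
    """Breadth-first crawl: visit each page exactly once, in discovery order."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        url = queue.popleft()
        order.append(url)
        parser = Links()
        parser.feed(SITE[url])        # real crawler: download url here
        for link in parser.found:
            if link not in seen:      # the visited set prevents loops
                seen.add(link)
                queue.append(link)
    return order

print(crawl("/index"))  # ['/index', '/a', '/b']
```

The visited set is what keeps the crawl from looping forever on the `/b -> /index` back-link.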
behavor_record
- Contains two pieces of code that monitor the behaviour of online users: whenever a user browses a web page in a browser, the visited URL and other login information are recorded.
mySpider
- A crawler written in Java that fetches the content of a specified URL. The content-processing part is not included, since everyone processes content differently (jsoup or XPath both work). Source only; the relevant parameters need to be adjusted.
blueleech
- A client-side web crawler tool analysed and built on standard crawling principles, with a visual client built in Java Swing. Users can crawl the content of specific pages and specify filter conditions (e.g. URL prefix, suffix, or file extension); the crawled pages are finally stored locally.
crawler_gae
- A Python-based web crawler hosted on GAE. It crawls the specified web content according to its settings and sends update notifications by e-mail; by changing the target URL and the regular-expression match, it can "subscribe" to sites that have no RSS feed.
speedtest
- A network testing tool. It can measure the network transfer speed of a specific speedtest server or of all servers.
  Usage:
    -h, --help  show this help message and exit
    share       share your speed result; generates an image of the test result on the speedtest website
    simple      suppress verbose output, only show basic information
    list        display the list of speedtest.net test servers sorted by distance
    serv
Weibo_spider
- Replace the URL to grab comments from a specified Weibo mobile page (suffix weibo.cn). You must first log in to the Weibo mobile site, then paste the site's cookies into the designated place in the code (simulated login).
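"Paste the cookies into the code" means attaching the logged-in session's `Cookie` header to each request. A sketch with the standard library; the cookie string and the comment-page URL are placeholders, not real values:

```python
from urllib.request import Request

# Placeholder cookie string -- in practice, paste the value copied from a
# logged-in browser session, as the entry describes.
cookies = "SUB=abc123; SSOLoginState=1600000000"

req = Request(
    "https://weibo.cn/comment/XXXX",          # placeholder comment-page URL
    headers={
        "Cookie": cookies,                    # simulated login
        "User-Agent": "Mozilla/5.0 (sketch)", # look like a normal browser
    },
)
print(req.get_header("Cookie"))
```

Fetching is then `urllib.request.urlopen(req)`; the server sees the cookies and serves the page as if the logged-in user had requested it.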
