I'm a Python beginner, sharing an approach to gathering information about a website, along with a Python script that implements it. For reference only~
First, collect common backup-file suffixes plus a few fixed filenames; you can freely add to or modify these lists:
[pre]
# Generate a backup-file guessing wordlist for a specific target URL
suffixList = ['.rar', '.zip', '.sql', '.gz', '.tar', '.bz2', '.tar.gz', '.bak', '.dat']
keyList = ['install', 'INSTALL', 'index', 'INDEX', 'ezweb', 'EZWEB', 'flashfxp', 'FLASHFXP']

# Prompt for the target URL
print('Please input the URL:')
url = input().strip()
# Strip the scheme, if any
if url[:5] == 'http:':
    url = url[7:]
if url[:6] == 'https:':
    url = url[8:]
# Keep only the hostname: drop everything from the first '/' on
numT = url.find('/')
if numT != -1:
    url = url[:numT]

# Derive target-specific filenames from the hostname (e.g. www.test.com):
num1 = url.find('.')
num2 = url.find('.', num1 + 1)
keyList.append(url[num1 + 1:num2])                        # test
keyList.append(url[num1 + 1:num2].upper())
keyList.append(url)                                       # www.test.com
keyList.append(url.upper())
keyList.append(url.replace('.', '_'))                     # www_test_com
keyList.append(url.replace('.', '_').upper())
keyList.append(url.replace('.', ''))                      # wwwtestcom
keyList.append(url.replace('.', '').upper())
keyList.append(url[num1 + 1:])                            # test.com
keyList.append(url[num1 + 1:].upper())
keyList.append(url[num1 + 1:].replace('.', '_'))          # test_com
keyList.append(url[num1 + 1:].replace('.', '_').upper())

# Combine every name with every suffix and write the wordlist to a txt file
tempList = []
for key in keyList:
    for suff in suffixList:
        tempList.append(key + suff)
with open('success.txt', 'w') as fobj:
    for each in tempList:
        fobj.write('/%s\n' % each)
print('OK!')
[/pre]
Test results (screenshot in the original post, not reproduced here).
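Note that the name-derivation step assumes a three-label hostname such as www.test.com; on a bare test.com, num2 would be -1 and the slices would misbehave. A more defensive variation is sketched below using urllib.parse. This is my own sketch, not part of the original script, and the function name derive_names is illustrative:

[pre]
from urllib.parse import urlparse

def derive_names(url):
    """Derive candidate backup filenames from a URL's hostname."""
    # Prepend a scheme if missing so urlparse can find the hostname
    host = urlparse(url if '://' in url else 'http://' + url).hostname
    parts = host.split('.')
    bases = {host, host.replace('.', '_'), host.replace('.', '')}
    if len(parts) > 2:
        # Drop the leading label (e.g. 'www') and add the shorter forms
        stripped = '.'.join(parts[1:])
        bases.update({parts[1], stripped, stripped.replace('.', '_')})
    names = set()
    for b in bases:
        names.add(b)
        names.add(b.upper())
    return sorted(names)
[/pre]

For example, derive_names('https://www.test.com/path') yields both the full-host forms (www.test.com, WWW_TEST_COM, ...) and the stripped forms (test, test.com, test_com, ...), while a bare test.com still produces sensible output.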
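Once success.txt is generated, you could probe the target to see which guessed paths actually exist. A minimal sketch with the standard library, assuming a reachable host; build_candidates and probe are names of my own choosing, not from the original post:

[pre]
import urllib.error
import urllib.request

def build_candidates(host, wordlist):
    """Combine the target hostname with each wordlist entry into full URLs."""
    return ['http://%s%s' % (host, path) for path in wordlist]

def probe(candidate, timeout=5):
    """Issue a HEAD request; return the HTTP status code, or None if unreachable."""
    req = urllib.request.Request(candidate, method='HEAD')
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None

# Usage: read the generated wordlist and report hits
# for url in build_candidates('www.test.com', open('success.txt').read().split()):
#     if probe(url) == 200:
#         print('Found:', url)
[/pre]

A HEAD request avoids downloading the (potentially large) backup file itself; only the status code is needed to confirm the file exists.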