Photon

Photon is a lightning fast web crawler which extracts URLs, files, intel & endpoints from a target.

[demo GIF]

Yep, I am using 100 threads and Photon won't complain about it because it's in Ninja Mode 😎

Why Photon?

Not Your Regular Crawler

Crawlers are supposed to recursively extract links, right? Well, that's kind of boring, so Photon goes beyond that. It extracts the following information:

  • URLs (in-scope & out-of-scope)
  • URLs with parameters (example.com/gallery.php?id=2)
  • Intel (emails, social media accounts, Amazon buckets, etc.)
  • Files (PDF, PNG, XML, etc.)
  • JavaScript files & Endpoints present in them
  • Strings based on custom regex pattern

The extracted information is saved in an organized manner.
[save demo screenshot]
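As a rough illustration (a hypothetical layout, not guaranteed to match your Photon version), a scan of example.com could produce a directory named after the target with one plain-text file per category of extracted data:

example.com/
    urls.txt      # hypothetical: crawled in-scope URLs
    intel.txt     # hypothetical: emails, social media accounts, buckets
    files.txt     # hypothetical: discovered files (PDF, PNG, XML, ...)
    custom.txt    # matches for the custom regex pattern (see -r below)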

Intelligent Multi-Threading

Here's a secret: most of the tools floating around on the internet aren't properly multi-threaded, even when they claim to be. They either hand the same list of items to every thread, which results in multiple threads processing the same item, or they slap a thread lock on everything and end up rendering multi-threading useless.
But Photon is different, or should I say "genius"? Take a look at the source and decide for yourself.
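As a minimal sketch of the idea (illustrative only, not Photon's actual code), the work list can be partitioned into disjoint slices so that every thread owns its own chunk and no lock is needed:

# Sketch: each thread gets its own non-overlapping slice of the URL list.
from concurrent.futures import ThreadPoolExecutor

import requests

def crawl(urls):
    # This worker only ever sees its own chunk, so no two threads
    # fetch the same URL and no shared-state lock is required.
    for url in urls:
        try:
            requests.get(url, timeout=10)
        except requests.RequestException:
            pass

def run(urls, thread_count=2):
    # Split the list into `thread_count` roughly equal, disjoint chunks.
    chunks = [urls[i::thread_count] for i in range(thread_count)]
    with ThreadPoolExecutor(max_workers=thread_count) as pool:
        pool.map(crawl, chunks)

run(["http://example.com/a", "http://example.com/b"], thread_count=2)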

Ninja Mode

In Ninja Mode, 3 online services are used to make requests to the target on your behalf.
So you effectively have 4 clients making requests to the same server simultaneously, which gives you a speed boost, minimizes the risk of connection resets, and spaces out the requests coming from any single client.
Here's a comparison generated by Quark where the lines represent threads:

[ninja mode comparison chart]
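The general idea can be sketched like this (a rough illustration with made-up placeholder relay URLs, not the actual services Photon uses):

# Sketch: rotate each request between a direct fetch and third-party relays.
from itertools import cycle
from urllib.parse import quote

import requests

RELAYS = [
    None,                                       # None = request the page directly
    "https://relay-one.example/fetch?url={}",   # placeholder relay, not a real service
    "https://relay-two.example/fetch?url={}",
    "https://relay-three.example/fetch?url={}",
]
relay_pool = cycle(RELAYS)

def ninja_get(url):
    relay = next(relay_pool)
    if relay is None:
        return requests.get(url, timeout=10)
    # URL-encode the target before handing it to the relay endpoint.
    return requests.get(relay.format(quote(url, safe="")), timeout=10)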

Usage

-u --url

Run Photon against a single website.

python photon.py -u http://example.com

Specifying the URL's scheme, i.e. http(s)://, is optional, but you must include www. if the website uses it.

Tip 💡 : If you feel like the crawling is taking too long or you simply don't want to crawl anymore, just press ctrl + c in your terminal and Photon will skip the rest of the URLs.

-l --level

Depth of crawling.

python photon.py -u http://example.com -l 3

Default Value: 2

-d --delay

You can keep a delay between requests made to the target by specifying the time in seconds.

python photon.py -u http://example.com -d 1

Default Value: 0

-t --threads

Number of threads to use.

python photon.py -u http://example.com -t 10

Default Value: 2

Tip 💡 : The optimal number of threads depends on your connection speed as well as the nature of the target server. If you have a decent network connection and the server doesn't have any rate limiting in place, you can use up to 100 threads.

-c --cookie

Cookie to send.

python photon.py -u http://example.com -c "PHPSESSID=821b32d21"

-n --ninja

Toggles Ninja Mode on/off.

python photon.py -u http://example.com --ninja

Default Value: False

Tip 💡 : Ninja mode routes your requests through third-party web services. Please help me add more such "APIs" to reduce the load on their servers, and turn off this mode whenever it's not required.

-s --seeds

Lets you add custom seed URLs, separated by commas.

python photon.py -u http://example.com -s "http://example.com/portals.html,http://example.com/blog/2018"

-r --regex

Specify a custom regex pattern to extract strings, e.g. \d{10} to match ten-digit numbers such as phone numbers.

python photon.py -u http://example.com -r "\d{10}"

The strings extracted using the custom regex pattern are saved in custom.txt.
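
All of these options can be combined in a single run. As an illustrative example, the following crawls 3 levels deep with 20 threads, a 1-second delay between requests, Ninja Mode enabled, and a custom regex pattern:

python photon.py -u http://example.com -l 3 -t 20 -d 1 --ninja -r "\d{10}"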

License

Photon is licensed under GPL v3.0.
