Introducing Spyder: A Comprehensive Domain Scanning Tool
Greetings, fellow security enthusiasts! Today, I’m thrilled to share a new tool that I’ve been working on – Spyder. As an ethical hacker, I understand the importance of efficiently scanning domains to uncover directories, parameters, endpoints, and APIs. Spyder is designed to make this process faster and more effective. Let’s dive into what Spyder can do and how you can use it to enhance your web reconnaissance efforts.
Why Spyder?
Web reconnaissance is a crucial phase in ethical hacking, providing insights into the structure and potential vulnerabilities of a target. Spyder aims to streamline this phase by offering a robust, flexible, and efficient tool for scanning domains.
Key Features
- Recursive URL Scanning: Automatically discover links, directories, parameters, and endpoints.
- File-Based URL Validation: Validate URLs built from a specified wordlist file (dir.txt), ensuring comprehensive coverage.
- Multithreading Support: Perform concurrent scanning with an adjustable thread count for optimal performance.
- Colorized Output: Enhance readability with color-coded output.
- Output to File: Save all discovered URLs to a results.txt file for easy reference.
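To give a feel for how recursive discovery and multithreading can fit together, here is a minimal standard-library sketch. This is an illustration, not Spyder's actual implementation; all function and class names here are assumptions.

```python
# Illustrative sketch of threaded, recursive link discovery.
# NOT Spyder's internals -- names and structure are assumptions.
from concurrent.futures import ThreadPoolExecutor
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkExtractor(HTMLParser):
    """Collects href/src attribute values from the tags of an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)


def extract_links(html: str, base_url: str) -> list:
    """Return absolute, same-host URLs found in an HTML page."""
    parser = LinkExtractor()
    parser.feed(html)
    host = urlparse(base_url).netloc
    absolute = (urljoin(base_url, link) for link in parser.links)
    return [u for u in absolute if urlparse(u).netloc == host]


def crawl(start_url, fetch, threads=10, max_pages=100):
    """Breadth-first crawl; `fetch(url)` must return the page body as a string."""
    seen = {start_url}
    frontier = [start_url]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        while frontier and len(seen) < max_pages:
            bodies = list(pool.map(fetch, frontier))  # fetch one level concurrently
            next_frontier = []
            for url, body in zip(frontier, bodies):
                for link in extract_links(body, url):
                    if link not in seen:
                        seen.add(link)
                        next_frontier.append(link)
            frontier = next_frontier
    return seen
```

Injecting `fetch` as a parameter keeps the crawl logic testable without network access; a real scanner would pass in an HTTP client there.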
Installation
Getting started with Spyder is straightforward. Clone the repository and install the required dependencies:
git clone https://github.com/mrTr1cky/spyder.git
cd spyder
pip install -r requirements.txt
Usage
Spyder offers multiple ways to input domains, making it versatile for different scenarios.
Scanning a Single Domain
To scan a single domain, use the following command:
python3 spyder.py -d <domain> [-t <threads>]
Example:
python3 spyder.py -d http://example.com -t 50
Scanning Multiple Domains
To scan multiple domains from a file, use:
python3 spyder.py -l <domains.txt> [-t <threads>]
Example:
python3 spyder.py -l domains.txt -t 50
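Whichever way the domains come in, the wordlist-based part of the scan boils down to joining each dir.txt entry onto a domain and probing the resulting URL. A minimal sketch of that step, assuming hypothetical helper names (this is not Spyder's actual code):

```python
# Sketch of wordlist-to-URL expansion and probing.
# Helper names are assumptions, not Spyder's actual internals.
import urllib.error
import urllib.request
from urllib.parse import urljoin


def candidate_urls(domain, paths):
    """Join each wordlist entry onto the domain, yielding URLs to probe."""
    base = domain if domain.endswith("/") else domain + "/"
    return [urljoin(base, p.lstrip("/")) for p in paths]


def probe(url, timeout=5):
    """Return True if the URL answers with a non-error status (sketch)."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except urllib.error.HTTPError as err:
        return err.code < 400
    except Exception:
        return False
```

Separating URL construction from probing lets the former be tested offline while the latter stays a thin, replaceable network wrapper.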
Piping Input from Other Tools
You can also pipe input from tools like subfinder:
subfinder -d <domain> | python3 spyder.py -t 100
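A common way to support this piping pattern is to fall back to standard input when neither -d nor -l is given, detected via `sys.stdin.isatty()`. A sketch of that resolution order (the function name is hypothetical, not Spyder's actual code):

```python
# Sketch of target resolution: -d flag, then -l file, then piped stdin.
# Function name is illustrative, not Spyder's actual internals.
import sys


def read_targets(domain=None, list_file=None):
    """Resolve scan targets from a flag, a file, or piped stdin (in that order)."""
    if domain:
        return [domain]
    if list_file:
        with open(list_file) as fh:
            return [line.strip() for line in fh if line.strip()]
    if not sys.stdin.isatty():  # something was piped in
        return [line.strip() for line in sys.stdin if line.strip()]
    return []
```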
Command-Line Options
- -d, --domain <domain>: Single domain to scan
- -l, --list <domains.txt>: File containing a list of domains to scan
- -t, --threads <threads>: Number of threads (default: 10)
- -f, --file <dir.txt>: Directory file to check for valid URLs (default: dir.txt)
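These flags map naturally onto Python's argparse module. Here is one way such a parser could be wired up, matching the documented defaults; this is a sketch, not necessarily Spyder's exact code:

```python
# Sketch of an argparse setup matching the documented flags.
# Attribute names (list_file, dir_file) are illustrative choices.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog="spyder.py",
                                     description="Spyder - Domain Scanner")
    parser.add_argument("-d", "--domain", help="single domain to scan")
    parser.add_argument("-l", "--list", dest="list_file",
                        help="file containing a list of domains to scan")
    parser.add_argument("-t", "--threads", type=int, default=10,
                        help="number of threads (default: 10)")
    parser.add_argument("-f", "--file", dest="dir_file", default="dir.txt",
                        help="directory file to check for valid URLs")
    return parser
```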
Example Output
Here is a sample output from running Spyder:
 ____ _ ____
/ ___|___ _ __ _ _| |_ / ___| ___ __ _ _ __
| | / _ \| '_ \| | | | __| | | _ / __/ _` | '_ \
| |__| (_) | | | | |_| | |_ | |_| | (_| (_| | | | |
\____\___/|_| |_|\__,_|\__| \____|\___\__,_|_| |_|
Spyder - Domain Scanner
Author: madtiger
Telegram: @DevidLuce
Address: Uganda
[+] Discovered URL: http://example.com/about
[+] Discovered URL: http://example.com/contact
...
Contribution and Feedback
Spyder is an open-source project, and contributions are welcome! If you have any suggestions for improvements, find bugs, or want to add new features, feel free to open an issue or submit a pull request on the GitHub repository.
About the Author
- Name: madtiger
- Telegram: @DevidLuce
- Address: Uganda
I hope you find Spyder as useful as I do in your web reconnaissance efforts. Happy hacking!