Today I will tell you about passive information gathering.
Contents:
What is Passive Information Gathering?
Whois Web Sites
Arin
CentralOps
DNSstuff
IHS
MX ToolBox
Natro
NetCraft
Robtex
Whois
Collecting Information from Archive Sites
Archive.org
UKWA
Loc.org
Gathering Information from Search Engines
Google
Bing
Scanning Through Social Media
Twitter
FriendFeed
Facebook
Linkedin
Instagram
What is Passive Information Gathering?
Passive information gathering means collecting information about a target over the internet without ever touching the target's servers and systems directly.
Now that the definition is out of the way, let's move on to the techniques we will use.
Whois Websites:
Whois sites are websites where we can query a target's domain registration and DNS records. Below I will give examples of these sites and show how they are used.
Arin:
Open the site linked below and an interface like the one shown appears. Type the site you want information about into the field highlighted in black.
Then click "Search". The search returns documents about the site, as shown below; you can read through them for information.
Link: http://www.arin.net
Central OPS:
When you open the site linked below, the following interface appears. Enter the IP address or the site's URL in the field shown.
Then click "go" and the information appears.
Site link: https://centralops.net/co/
DNSstuff:
Open the site from the link below and click the "Free Tools" section I marked.
It then offers many lookup options; from here you can probe the target site however you like.
Site link: https://www.dnsstuff.com
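The basic lookup these tools perform (hostname to IP address) is also available from the standard library; a minimal sketch using the system resolver (the function name is mine, not DNSstuff's):

```python
import socket

def resolve_a_records(hostname):
    """Resolve the IPv4 (A record) addresses for a hostname using the
    system resolver -- the same lookup the online tools perform."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # getaddrinfo yields (family, type, proto, canonname, sockaddr) tuples;
    # the IP is the first element of sockaddr. Deduplicate, keep order.
    ips = []
    for *_, sockaddr in infos:
        ip = sockaddr[0]
        if ip not in ips:
            ips.append(ip)
    return ips
```

Note this only covers A records; other record types (MX, NS, TXT) need either a dedicated DNS library or a hand-built query, as sketched later.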
IHS:
Go to the site from the link below; an interface like the one shown welcomes us. Enter the site's URL where I marked.
After typing it and clicking "Search", the site's domain information is displayed.
Site Link: https://www.ihs.com.tr/mainMenu.html
MX ToolBox
After opening the site, enter the site's URL where shown and click "MX Lookup".
It runs a broad scan and returns the mail (MX) record information.
Site Link: https://mxtoolbox.com
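Under the hood, an MX lookup like MX Toolbox's is a single DNS question with QTYPE 15. As a sketch of what goes over the wire, here is the raw query packet per RFC 1035 (building only; sending it to a resolver on UDP port 53 and parsing the answer are omitted):

```python
import struct

# QTYPE 15 = MX, QCLASS 1 = IN (RFC 1035)
QTYPE_MX, QCLASS_IN = 15, 1

def build_mx_query(domain, query_id=0x1234):
    """Build the raw DNS query packet an MX lookup sends on the wire.
    Header: ID, flags 0x0100 (standard query, recursion desired),
    QDCOUNT=1, then the question section."""
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    qname = b""
    for label in domain.rstrip(".").split("."):
        qname += bytes([len(label)]) + label.encode()
    qname += b"\x00"  # zero-length root label terminates the name
    question = qname + struct.pack(">HH", QTYPE_MX, QCLASS_IN)
    return header + question
```

In practice you would send these bytes with `socket.sendto` to a resolver such as your ISP's, then decode the answer section to read the mail server names and preferences.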
Natro
After opening the site (it is in Turkish, but easy to use, so you can see directly where to enter the site), type the site's URL and click "Query". It then shows the site's domain information.
Site Link: https://www.natro.com/domain-sorgulama/sonuc
NetCraft
Go to the site from the link below and enter the site's URL in the field shown.
It likewise displays the domain information.
Site Link: https://www.netcraft.com
Robtex:
Open the site from the link below, enter the site's URL, and click "GO". It then returns data about the site, right down to its Alexa statistics.
Site Link: https://www.robtex.com
Whois:
Open the site from the link below, enter the site's URL, and run the lookup. It returns the domain information, though not in great depth.
Site Link: https://www.whois.com
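All of the sites above are front-ends for the WHOIS protocol itself (RFC 3912), which is just a query string sent over TCP port 43. A minimal sketch of doing the same lookup directly, assuming outbound network access (the function names here are illustrative, not from any of the sites above):

```python
import socket

def whois_query(domain, server="whois.iana.org", port=43):
    """Send a raw WHOIS query (RFC 3912): the protocol is simply the
    query string followed by CRLF over a TCP connection to port 43."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall(domain.encode() + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

def extract_field(response, field):
    """Pull the first 'Field: value' line out of a WHOIS response text."""
    for line in response.splitlines():
        if line.strip().lower().startswith(field.lower() + ":"):
            return line.split(":", 1)[1].strip()
    return None
```

whois.iana.org typically answers with a "refer:" line naming the registry for that TLD, which you would then query the same way, e.g. `extract_field(whois_query("example.com"), "refer")`.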
Gathering Information from Archive Sites:
The advantage of archive sites is that they retain a target site's history (old versions, etc.). Learning a site's background is valuable to anyone gathering information about it. There are many archive sites for this, but I'll show you a few without further ado.
Archive.org:
Open the site from the link below. Then enter the site's URL in the field I highlighted.
It lists snapshots at the bottom, broken down by year, month, and day, going back to the year the site was established. Click any date you want for a detailed view.
Site Link: web.archive.org
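Besides the web interface, the Wayback Machine exposes an availability API that reports the archived snapshot closest to a given date. A minimal sketch (the function names are mine; fetching requires network access):

```python
import json
import urllib.parse
import urllib.request

def wayback_snapshot_url(site, timestamp=None):
    """Build a request URL for the Wayback Machine availability API."""
    params = {"url": site}
    if timestamp:  # YYYYMMDD (or a prefix): snapshot closest to this date
        params["timestamp"] = timestamp
    return ("https://archive.org/wayback/available?"
            + urllib.parse.urlencode(params))

def closest_snapshot(site, timestamp=None):
    """Fetch and decode the API response (network access required).
    Returns a dict with 'url', 'timestamp', 'status' for the closest
    snapshot, or None if the site was never archived."""
    with urllib.request.urlopen(wayback_snapshot_url(site, timestamp),
                                timeout=10) as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")
```

For example, `closest_snapshot("example.com", "2010")` would return the snapshot nearest to 2010, which you can then open in a browser.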
UKWA
Open the site from the link below. Enter the target site's URL where marked and press the "Search" button to start the scan.
In this example it found 260 results, five pages' worth, going back to 2014.
Site Link: www.webarchive.org.uk/
Loc.org:
Open the link below and enter the site's URL in the highlighted field.
The data is not very detailed, but we still get something.
Site Link: webarchive.loc.gov/
Gathering Information from Search Engines:
With certain kinds of queries, we can collect information about the target site from search engines. The amount of such information search engines index should not be underestimated.
I will explain based on Google.
Searching for a domain's indexed pages on Google:
Code:
site:xyz.com
Searching for xls files within the site on Google:
Code:
site:xyz.com filetype:xls
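These search operators (often called "dorks") combine freely; a minimal helper for assembling the query strings (the function name and parameters are mine, for illustration):

```python
def build_dork(domain, filetype=None, inurl=None, extra=None):
    """Assemble a search-engine query string from standard operators:
    site: restricts to a domain, filetype: to a file extension,
    inurl: to a URL substring; extra holds free-text keywords."""
    parts = ["site:" + domain]
    if filetype:
        parts.append("filetype:" + filetype)
    if inurl:
        parts.append("inurl:" + inurl)
    if extra:
        parts.append(extra)
    return " ".join(parts)
```

For example, `build_dork("xyz.com", filetype="xls")` produces the xls query shown above, ready to paste into the search box.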
Bing
In the Bing engine, the ip: operator lists every indexed site hosted on a given IP address.
We search by typing ip: followed by the target's IP address.
As you can see, it returned results.
Browsing Through Social Media:
You can gather information from social media posts (career pages, job postings, celebration announcements, etc.). Try the following sites for this:
www.twitter.com
Since most organizations have Twitter accounts, you can gather both corporate and personal information.
www.friendfeed
Since most organizations have accounts here too, you can gather both corporate and personal information.
www.facebook.com
Since most organizations have Facebook accounts, you can gather both corporate and personal information. (Very effective.)
www.linkedin.com
Since most organizations have LinkedIn accounts, you can gather both corporate and personal information. (Many people in the business world have LinkedIn accounts.)
www.instagram.com
With this platform, perhaps among the most used social media tools of our age, you can gather information about people.
Quotation:
Quote from a member named xMit:
"This kind of information gathering also helps find the real IP addresses of sites behind the Cloudflare system; it is worth adding to the end of the topic."
Source: https://www.turkhackteam.org/web-server-guvenligi/1829772-pasif-bilgi-toplama-whitered.html
Translator: @Thekoftte