Kayran is capable of ingesting HAR files to perform deeper crawling against your web assets.
HAR (short for HTTP Archive) is a format used to record the traffic exchanged between a web browser and a website. Supplying a HAR file helps Kayran perform a more efficient scan.
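For reference, a minimal HAR file is a JSON document shaped roughly like the sketch below (the URL, timestamps, and sizes are illustrative, not values Kayran requires):

```json
{
  "log": {
    "version": "1.2",
    "creator": { "name": "Browser DevTools", "version": "1.0" },
    "entries": [
      {
        "startedDateTime": "2024-01-01T00:00:00.000Z",
        "time": 42,
        "request": {
          "method": "GET",
          "url": "https://example.com/login",
          "httpVersion": "HTTP/1.1",
          "headers": [],
          "queryString": [],
          "headersSize": -1,
          "bodySize": 0
        },
        "response": {
          "status": 200,
          "statusText": "OK",
          "httpVersion": "HTTP/1.1",
          "headers": [],
          "content": { "size": 0, "mimeType": "text/html" },
          "headersSize": -1,
          "bodySize": 0
        }
      }
    ]
  }
}
```

Most browsers can export such a file from their developer tools (Network tab, "Save all as HAR"), which captures the requests made while you browse the target.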
Enabling Enumeration causes Kayran to brute-force the target, inserting random parameters and paths for testing. This places a heavy load on the server.
Pay attention: enabling Enumeration will significantly extend the scan time.
The "Level Deep" gauge determines the depth of the scan, in other words, how many of the website's files and directories will be scanned.
With the "Single Scan" option, only the given page is scanned, without taking into account other pages related to it; selecting it disables the Crawler.
A robots.txt file tells search-engine crawlers which URLs they can (or cannot) access on your site.
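A typical robots.txt looks like the following (the paths and sitemap URL are illustrative):

```text
User-agent: *
Disallow: /admin/
Allow: /
Sitemap: https://example.com/sitemap.xml
```

Because it enumerates paths the site owner wants hidden from crawlers, robots.txt is also a useful source of candidate URLs for a scanner.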
A Sitemap is an .xml file that lists the URLs for a site.
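A minimal sitemap.xml might look like this (the URLs and dates are illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example.com/</loc>
    <lastmod>2024-01-01</lastmod>
  </url>
  <url>
    <loc>https://example.com/login</loc>
  </url>
</urlset>
```

Each `<url>` entry lists one page; a scanner can read these entries to discover pages without crawling for them.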
The "Scan Only CVEs" option tells Kayran to search only for known CVEs (publicly disclosed vulnerabilities) in the scanned target.