If you use youtube-dl to download playlists, the download speed will often slow to a crawl once your terminal session goes away. You can get around that with two methods: 1) keep the terminal where the script is running open, or 2) issue the command using the screen tool, e.g. "screen sudo youtube-dl ...", then just press Ctrl-A then D to detach, close the window (PuTTY), and go about your merry way.
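For example (the session name and playlist URL are placeholders):
# Start youtube-dl inside a named screen session
screen -S ytdl sudo youtube-dl "PLAYLIST_URL"
# Detach with Ctrl-A then D; the download keeps running.
# Reattach later to check on progress:
screen -r ytdl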
Tuesday, February 27, 2018
Website temporarily down
www.yosefkerzner.com is temporarily offline due to being used in a DNS tunneling test. The test was successful and I will eventually post about it.
As such, I can't share a script written today for removing duplicates from wordlists for hashcat.
1. awk '!seen[$0]++' filename is nice and all, but it uses all the available RAM; it slows down considerably once a file is larger than about 600 MB and basically never finishes beyond that point.
2. sort -u -o outfile infile uses 60% of available RAM and all the CPUs, and takes possibly three times as long as option 1.
I wrote a script that splits the input file into 160-megabyte pieces, which seems like a good best-speed size and keeps RAM usage under 3 GB per piece. The option 1 command is then run on each piece, and the pieces get concatenated back into a single large file; a minimal sketch follows below.
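Here is a minimal sketch of that approach, not the original script (file names and the output path are illustrative; note that duplicates spanning two different pieces will survive a single pass):
#!/bin/bash
infile="$1"
tmpdir=$(mktemp -d)
# Split on line boundaries into pieces of at most 160 MB each.
split -C 160M "$infile" "$tmpdir/piece_"
# Run the option 1 awk command on each piece.
for piece in "$tmpdir"/piece_*; do
    awk '!seen[$0]++' "$piece" > "$piece.dedup"
done
# Concatenate the deduplicated pieces back into one wordlist.
cat "$tmpdir"/piece_*.dedup > "$infile.dedup"
rm -rf "$tmpdir"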
First benchmarks (i5-2310):
On a 3.5 GB wordlist (which was already deduplicated, mind you), the script took 7.15 minutes to complete.
On an 8.3 GB wordlist (mostly deduplicated), the script took 19.5 minutes to complete.
Deduping rockyou.txt takes 18 seconds.
Update: Site's back up, here's the script.
Friday, February 23, 2018
Script to download daily Performance Today episodes
Call it whatever, such as pt.sh, and put it in cron as follows (the user field means this entry belongs in /etc/crontab, not a per-user crontab):
0 9 * * 1-6 root /home/pi/Programs/pt.sh
#!/bin/bash
# Download both hours of today's Performance Today episode.
# The URL embeds the date twice: YYYY/MM/DD in the path and
# YYYYMMDD in the file name.
# Hour 1
wget -P /mnt/usb/Music/Radio/Performance_Today/ \
"download.publicradio.org/performance_today/$(date +%Y/%m/%d)/pt1_$(date +%Y%m%d)_128.mp3"
# Hour 2
wget -P /mnt/usb/Music/Radio/Performance_Today/ \
"download.publicradio.org/performance_today/$(date +%Y/%m/%d)/pt2_$(date +%Y%m%d)_128.mp3"
Tuesday, February 20, 2018
Silent new feature in Firefox - First Party Isolation
https://news.slashdot.org/story/17/11/20/1713235/another-tor-browser-feature-makes-it-into-firefox-first-party-isolation
Another item to add to the list
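To try it, flip privacy.firstparty.isolate to true in about:config, or set it in user.js (same pref, assuming it hasn't been renamed since):
user_pref("privacy.firstparty.isolate", true);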
Monday, February 05, 2018
Cool tool for SMB searching
SMBMap - on my last engagement this would have come in handy. Look, you can even search by regex patterns across a list of IPs or file shares. Now script it for keywords like "legal", "accounting", "sales", "passwords", "termination", "future", etc.
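A rough sketch of that scripting idea (credentials, domain, and host list are placeholders; -F for file content search and --host-file are per the SMBMap README, so verify the flags against your version):
#!/bin/bash
# Run SMBMap's file content search once per keyword across a host list.
keywords=("legal" "accounting" "sales" "passwords" "termination" "future")
for kw in "${keywords[@]}"; do
    smbmap --host-file hosts.txt -u USER -p PASS -d DOMAIN -F "$kw"
done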