J Wolfgang Goerlich's thoughts on Information Security
Surviving the Robot Apocalypse

By wolfgang. 28 January 2013 11:14

I am on the latest BSides Chicago podcast episode: The Taz, the Wolf, and the exclusives. Security Moey interviewed me about a new talk I am developing for Chicago, titled Surviving the Robot Apocalypse.

The inspiration comes from Twitter. Tazdrumm3r once said, "@jwgoerlich <~~ My theory, he’s a Terminator from the future programmed 4 good 2 rally & lead the fight against SkyNet, MI branch." A few weeks back, we were discussing the robotics articles I wrote for Servo magazine and some 'bots I built with my son. To which, Infosec_Rogue said, "@jwgoerlich not only welcomes our robot overlords, he helped create them."

I can roll with that. Let's do it.

The goal of this session is to cover software security and software vulnerabilities in an enjoyable way. Think Naked Boulder Rolling and Risk Management. Unless, of course, you didn't enjoy Naked Boulder Rolling. In that case, imagine some other talk I gave that you enjoyed. Or some other talk someone else gave that you enjoyed. Yeah. Pick one. Got it? Surviving is like your favorite talk, only for software security principles and their applicability to InfoSec.

I hope to see you in Chicago.

Surviving the Robot Apocalypse

Abstract. The robots are coming to kill us all. That, or the zombies. One way or the other, humanity stands on the brink. While many talks have focused on surviving the zombie apocalypse, few have given us insights into how to handle the killer robots. This talk seeks to fill that void. By exploring software security flaws and vulnerabilities, we will learn ways to bypass access controls, extract valuable information, and cheat death. Should the unthinkable happen and the apocalypse not come, the information learned in this session can also be applied to protecting less-than-lethal software. At the end of the day, survival is all about the software.



Sticky PowerShell and Command Prompt settings

By wolfgang. 27 January 2013 08:36

I have been doing more with PowerShell recently. One thing that troubled me was that my color, font, and size settings were not sticky. I would get PowerShell configured just right. But when I would launch PowerShell from the command line or use it to debug in Visual Studio, my personal settings would be lost.

The cause is that the settings in a console session are applied to the shortcut link. The settings are not applied universally to the user profile. Suppose you start a console and click the console icon to set the font, layout, and color. These settings are saved in the shortcut that was used to launch that particular console.

To persist across sessions, you need to update the default values in your user profile. This is stored in the registry under HKEY_CURRENT_USER\Console.

The command prompt is under: [HKEY_CURRENT_USER\Console\%SystemRoot%_system32_cmd.exe]

The PowerShell prompt is under: [HKEY_CURRENT_USER\Console\%SystemRoot%_system32_WindowsPowerShell_v1.0_powershell.exe]

Here are the values:

ColorTable00 == dword storing the background color in blue, green, red.
ColorTable07 == dword storing the foreground color in blue, green, red.
FaceName == string storing the name of the font, such as "Lucida Console".
FontFamily == dword set to 0x36 for fixed width fonts.
FontSize == dword set to 00, then the font size in hex, then 0000. For example, 10-point (0xA) font is 000A0000.
FontWeight == a dword defaulting to 0x190.
WindowSize == a dword in coord format: 2-byte height and 2-byte width. For example, 0028 0078 is 40 characters tall by 120 characters wide.
ScreenBufferSize == dword in coord format. Note the WindowSize cannot exceed the ScreenBufferSize.
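The coord packing for FontSize and WindowSize can be checked with a little shell arithmetic (a sketch; the height, width, and point size here are example values):

```shell
# Pack console coord dwords: high word = height, low word = width.
height=40
width=120
printf 'WindowSize = %08X\n' $(( (height << 16) | width ))   # 00280078

# FontSize stores the point size in the high word; e.g. 10-point:
points=10
printf 'FontSize   = %08X\n' $(( points << 16 ))             # 000A0000
```

These match the examples above: 40 rows by 120 columns packs to 00280078, and a 10-point font packs to 000A0000.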

I prefer a retro green screen console with large fonts and a large buffer. Here are the values from my registry:

Windows Registry Editor Version 5.00

"FaceName"="Lucida Console"


Systems Engineering

Grep on Windows compared to OSX

By wolfgang. 26 January 2013 07:55

Chapter 2: In which Mark gives Wolfgang OSX envy.

Earlier this week, I published a speed test comparing Gnu grep and PowerShell Select-String. Mark Boltz (@mtezna) posted a follow-up titled The grep Project over on Mark's blog, The Tao of the Net. OSX appears to offer a substantial performance increase over Windows when executing both Gnu grep and BSD grep.

Check it out. Mark and I are comparing notes.



Systems Engineering

Grep versus Select-String Speedtest

By wolfgang. 21 January 2013 18:28

How fast is grep? Reasonably fast. Over the weekend, we were discussing on Twitter a post from Mike Haertel. Mike was the original developer of GNU grep. In the post titled "why GNU grep is fast", Mike described the algorithm grep uses. He also provided this excellent advice: "#1 trick: GNU grep is fast because it AVOIDS LOOKING AT EVERY INPUT BYTE. #2 trick: GNU grep is fast because it EXECUTES VERY FEW INSTRUCTIONS FOR EACH BYTE that it *does* look at." "The key to making programs fast is to make them do practically nothing."

This had me wondering about how PowerShell's Select-String stacks up. Richard Minerich (@rickasaurus) brought up a good point: compiled C code is generally faster than C# code. As PowerShell rests on .NET, we can make an assumption that grep should be faster than Select-String. Mark Boltz (@mtezna) suggested running several tests of both and taking an average to get a sense of how Select-String stacks up.

If Select-String were significantly slower, then a good weekend project might be to write a faster parser. I do have the occasional free weekend, and I was very curious. Today, I performed such a test. Read on to find out what I learned.

Test Parameters

I generated sample files using a sample dictionary file. Each file contained sentences of random length (5-25 random words). One in ten sentences contained the word "key" at a random location within the sentence. There were eleven sample files: 1,000 sentences, 10,000 sentences, 20,000 sentences, and so on to 100,000 sentences. (You can download the resulting test files here: grep-select-string-test.zip).
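The generator can be sketched in shell (the word list and file name here are illustrative stand-ins; the original used a dictionary file). One sentence in ten gets "key" dropped in at a random position:

```shell
# Generate 1,000 sentences of 5-25 random words; every tenth sentence
# has one word overwritten with "key" at a random spot.
words=(alpha bravo charlie delta echo foxtrot golf hotel india juliet)
n=1000
for ((i = 1; i <= n; i++)); do
  len=$(( RANDOM % 21 + 5 ))
  sentence=()
  for ((j = 0; j < len; j++)); do
    sentence+=("${words[RANDOM % ${#words[@]}]}")
  done
  if (( i % 10 == 0 )); then
    sentence[RANDOM % len]="key"
  fi
  echo "${sentence[*]}."
done > file1000.txt

wc -l < file1000.txt       # 1000 lines
grep -c key file1000.txt   # 100 lines contain "key"
```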

Each search was performed seven times. System.Diagnostics.Stopwatch was used as the time source. The total milliseconds elapsed was used as the time measure. The minimum time and the maximum time were dropped. The time recorded was the average of the remaining five tests.
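The drop-min/max averaging can be sketched in a one-liner (the seven timings here are made-up numbers, not the recorded data):

```shell
# Seven timings in ms; drop the fastest and slowest, average the other five.
times='248 251 249 260 245 247 250'
printf '%s\n' $times | sort -n | sed '1d;$d' \
  | awk '{ sum += $1; n++ } END { printf "%.1f ms\n", sum / n }'
# -> 249.0 ms
```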

I used the latest GNU grep for Windows, version 4.2.1, released 2012-12-18. The command executed for grepping the file was: grep "key" "file1000.txt"

For PowerShell, I used version 3 (build 6.2.9200.16398). The PowerShell equivalent of the grep command was: Select-String -Pattern "key" -Path .\file1000.txt

The host operating system was Windows Server 2008 R2 SP1 with the latest hotfixes.


In the following graph, the number of lines in the sample files is plotted on the x-axis. The total time to search the sample file is plotted on the y-axis in milliseconds.

Lines     Grep (ms)     Select-String (ms)
1,000     248.2245      29.8712
10,000    1,907.8156    299.4792
20,000    4,013.5332    643.2678
30,000    6,689.0545    1,036.1867
40,000    8,419.1654    1,319.9755
50,000    10,870.3179   1,662.6931
60,000    12,487.7127   1,955.2525
70,000    15,048.1311   2,344.9599
80,000    16,623.6946   2,594.3496
90,000    16,775.1033   2,995.7644
100,000   18,697.6675   3,303.2918

The bottom line? Select-String is significantly faster than GNU grep on Windows Server 2008 R2. PowerShell is closing the gap between Linux and Windows shell environments.


Systems Engineering

Out and About: Incident Management with PowerShell

By wolfgang. 18 January 2013 20:24

Matt Johnson and I will be presenting on incident management and PowerShell at next month's Motor City ISSA. This is part of the PoshSec initiative.

Incident Management with PowerShell

Have you seen the latest scare? The Java 0-day exploit that allows attackers to execute code on your computer? Now scares come and scares go. But let’s suppose for a moment your servers were infected using this exploit. How could your administrators detect the attack? How would you recover? Even better, what could have been done beforehand and how could you prevent this from happening again?

Incident Management, of course, is the security practice that seeks to answer these questions. In Windows server environments, PowerShell is the way Incident Management gets put into practice. This session will introduce InfoSec professionals and systems administrators to PowerShell’s security features. We will provide an overview of Incident Management and PowerShell. Then, using the Java 0-day exploit as a driver, we will walk through the lifecycle of an incident. The audience will leave with information on the policy and practice of managing security incidents in Windows with PowerShell.

J Wolfgang Goerlich is the information systems and security manager for a Michigan-based financial institution. He is responsible for managing the software development and network operations team. Wolfgang's background is in architecting new systems, securing existing systems, and optimizing performance and recovery. With over a decade of experience, Mr. Goerlich has a solid understanding of both the IT infrastructure and the business it enables.

Matt Johnson is a Systems Analyst from the Metro Detroit area. As an avid technologist and tinkerer, he is always looking to understand and improve the world around him. Matt has a strong interest in automation and the use of PowerShell. Matt founded the SE Michigan PowerShell User Group and was a judge for the last two years for the Microsoft Scripting Games. He holds numerous certifications and writes a blog at http://www.mwjcomputing.com. You can follow him on twitter by following @mwjcomputing.

Motor City ISSA. February 21st, 2013. Livonia, MI.


Privilege management at CSO

By wolfgang. 17 January 2013 04:01

Least Privilege Management (LPM) is in the news ...

The concept has been around for decades. J. Wolfgang Goerlich, information systems and information security manager for a Michigan-based financial services firm, said it was, "first explicitly called out as a design goal in the Multics operating system, in a paper by Jerome Saltzer in 1974."

But, it appears that so far, it has still not gone mainstream. Verizon's 2012 Data Breach Investigations Report found that, of the breaches it surveyed, 96% were not highly difficult for attackers and 97% could have been avoided through simple or intermediate controls.

"In an ideal world, the employee's job description, system privileges, and available applications all match," Goerlich said. "The person has the right tools and right permissions to complete a well-defined business process."

"The real world is messy. Employees often have flexible job descriptions. The applications require more privileges than the business process requires," he said. "[That means] trade-offs to ensure people can do their jobs, which invariably means elevating the privileges on the system to a point where the necessary applications function. But no further."

Read the full article at CSO: Privilege management could cut breaches -- if it were used


Security | Systems Engineering

Considerations when testing Denial of Service

By wolfgang. 16 January 2013 15:59

Stress-testing has long been a part of every IT Operations toolkit. When a new system goes in, we want to know where the weaknesses and bottlenecks are. Stress-testing is the only way.

Now, hacktivists have been providing stress-tests for years in the form of distributed denial of service attacks. Such DDoS attacks accompany just about any news event. As moves are underway to make DDoS a form of free speech, we can expect more in the future.

With that as a background, I have been asked recently for advice on how to test for a DDoS. Here are some considerations.

First, test on the farthest router away that you own. The “you own” part is essential. Let’s not run a DDoS across the public Internet or even across your hosting provider’s network. That is a quick way to run afoul of terms of service and, potentially, the law. Moreover, it is not a good test. A DDoS from, say, home will be bottlenecked by your ISP and the Internet backbone (1-10 Mbps). A better test is off the router interface (100-1000 Mbps).

Second, use a distributed test. Distributed testing is common practice when stress-testing; it is also what puts the first D in DDoS. Alright, that was a bad joke. The point is that you want to keep individual device differences from skewing the test, such as a bottleneck within the OS or the testing application. My rule of thumb is 5:1. So if you are testing one router interface at 1 Gbps, you would want to send 5 Gbps of data via five separate computers.

Third, use a combination of traditional administration tools and the tools in use for DDoS. Stress-test both the network layer and the HTTP layer of the application. If I were to launch a DDoS test, I would likely go with iperf, loic, and hoic. Check also for tools specific to the web server, such as ab for Apache. Put together a test plan with test scripts and repeat this plan in a consistent fashion.
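As a sketch of driving such a test from a script (the host names and the 192.0.2.10 target address are placeholders; run iperf in server mode on the target side first):

```shell
# Print the client command for each of five test machines.
# Each machine sends four parallel streams for 60 seconds.
target=192.0.2.10
for host in test1 test2 test3 test4 test5; do
  echo "ssh $host iperf -c $target -t 60 -P 4"
done
```

In a real test plan, the echo would become an ssh invocation and the results would be collected per host for comparison against the 5:1 rule of thumb.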

Fourth, test with disposable systems. The best test machine is one with a basic installation of the OS, the test tools, and the test scripts. This minimizes variables in the test. Also, while rare, it is not unheard of for tools like loic and hoic to be bundled with malicious software. Once the test is complete, the systems used for testing should be re-imaged before being returned to service.

Let’s summarize by looking at a hypothetical example. Assume we have two Internet routers, two core routers, two firewalls, and then two front-end web servers. All are on 1 Gbps network connections. I would re-image five notebooks with a base OS and the DDoS tools. With all five plugged into the network switch on the Internet routers, I would execute the DDoS test and collect the results. Then repeat the exact same test (via script) on the core routers network, on the firewall network, and on the web server network. The last step is to review the entire data set to identify bottlenecks and make recommendations for securing the network against DDoS.

That's it. These are simple considerations that reduce the risk and increase the effectiveness of DDoS testing.


Security | Systems Engineering

Incog: past, present, and future

By wolfgang. 10 January 2013 12:30

I spent last summer tinkering with covert channels and steganography. It is one thing to read about a technique. It is quite another to build a tool that demonstrates it. To do the thing is to know the thing, as they say. It is like the art student who spends time duplicating the works of past masters.

And what did I duplicate? I started with the favorites: bitmap steganography and communication over ping packets. I did Windows-specific techniques such as NTFS ADS, shellcode injection via Kernel32.dll, mutexes, and RPC. I also replicated Dan Kaminsky’s Base32 over DNS. Then I tossed in a few evasion techniques like numbered sets and entropy masking.

Incog is the result of this summer of fun. Incog is a C# library and a collection of demos which illustrate these basic techniques. I released the full source code last fall at GrrCon. You can download Incog from GitHub.

If you would like to see me present on Incog, including my latest work with new channels and full PowerShell integration, I am up for consideration for Source Boston 2013.

Please vote here: https://www.surveymonkey.com/s/SRCBOS13VS

This year SOURCE Boston is opening up one session to voter choice. Please select the session you would like to see at SOURCE Boston 2013. Please only vote once (we will be checking) and vote for the session you would be the most interested in seeing. Voting will close on January 15th.

OPTION 5: Punch and Counter-punch with .Net Apps, J Wolfgang Goerlich, Alice wants to send a message to Bob. Not on our network, she won’t! Who are these people? Then Alice punches a hole in the OS to send the message using some .Net code. We punch back with Windows and .Net security configurations. Punch and counter-punch, breach and block, attack and defend, the attack goes on. With this as the back story, we will walk thru sample .Net apps and Windows configurations that defenders use and attackers abuse. Short on slides and long on demo, this presentation will step thru the latest in Microsoft .Net application security.


Cryptography | Out and About | Security

Write-up of the 29c3 CTF "What's This" Challenge

By wolfgang. 8 January 2013 17:07

Subtitled: "How to capture a flag in twelve easy days"

The 29th Chaos Communication Congress (29C3) held an online capture the flag (CTF) event this year. There were several challenges, which you can see at the CTF Time page for the 29c3 CTF. I spent most of the time on the "What's This" challenge. The clue was a USB packet capture file named what_this.pcap.

The first thing we did was run strings what_this.pcap and look at the ASCII and Unicode strings in the capture. ASCII: CASIO DIGITAL_CAMERA 1.00, FAT16, ACR122U103h. Unicode: CASIO QV DIGITAL, CASIO COMPUTER, CCID USB Reader.

The second thing we did was to open the capture in Wireshark 1.8.4. (Using the latest version of Wireshark is important as the USB packet parser is still being implemented.) We knew Philip Polstra had covered USB forensics in the past, such as at GrrCon, and Philip pointed us to http://www.linux-usb.org/usb.ids for identifying devices. We see a Genesys Logic USB-2.0 4-Port HUB (frame.number==2), a Linux Foundation USB 2.0 root hub (frame.number==4), Holtek Semiconductor Shortboard Lefty (frame.number==32, 42), a Casio Computer device (frame.number==96, 106), and another Casio Computer device (frame.number==1790).

Supposition? The person is running Linux with a keyboard, Casio camera, and smart card (CCID) reader attached over USB. A Mifare Classic card (ACR122U103) is swiped on the CCID reader. The camera is mounted (FAT16) and a file or files are read from the device.

Next, we extracted the keystrokes. I had previously written a simple keystroke analyzer for the CSAW CTF Qualification Round 2012. This simply took the second byte in the USB keyboard packets (URB_INTERRUPT) and added 94. This meant the alphabetical characters were correct, however, all special characters and linefeeds were lost. The #misec 29c3 CTF captain, j3remy, passed along a lookup table. Using this lookup table, we found the following keystrokes:

mmouunt t vfaat //ddev/ssdb1 /usb
ccd usb
llss —l
fiille laagg
ccat flaag \ aespipe -p 3 -d 3,,, nffs-llisst \c-llisst \ grrep uuid \ cut =-d -f 10\ dd sbbs=113 couunnt=2

There are a number of problems with this method of analyzing keystrokes. First, when the key is held down too long, we get multiple letters (dmeesg). Second, special keys like shift and backspace are ignored. I redid my parser to read bytes 1, 2, and 3. The first byte in a keyboard packet is whether or not shift is depressed. The second byte is the character (including keys like enter and backspace). The third byte is the error code for repeating characters. Using this information, I mapped the HID usage tables to Microsoft's SendKeys documentation and replayed the packet file into Notepad.
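The byte layout can be illustrated with a minimal decoder (a sketch covering only lowercase letters, space, and Enter; the real parser mapped the full HID usage table to SendKeys):

```shell
# Decode "modifier keycode" pairs using the USB HID usage table:
# keycodes 4-29 are a-z, 44 is space, 40 is Enter; modifier 2 or 32 is shift.
decode() {
  awk '{
    code = $2
    if (code >= 4 && code <= 29)  c = sprintf("%c", code - 4 + 97)  # a-z
    else if (code == 40)          c = "\n"                          # Enter
    else if (code == 44)          c = " "                           # space
    else                          c = "?"
    if ($1 == 2 || $1 == 32) c = toupper(c)                         # shift held
    printf "%s", c
  }'
}

printf '0 6\n0 4\n0 23\n' | decode    # keycodes 6, 4, 23 decode to "cat"
```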

mount -t vfat /dev/sdb1 usb
cd usb
ls -l
file  lag
cat flag | aespipe -p 3 -d 3<<< "`nfc-list | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"

Supposition? The person at the keyboard plugged in the Casio camera and mounted it to usb. He listed the folder contents, then scanned for the Mifare Card (nfc-list lists near field communications devices via ISO14443A). Once confirmed, he read the flag from the camera and decrypted it via AES 128-bit encryption in CBC mode (man aespipe). The passcode was the UID of the Mifare Card in bytes (nfc-list | grep | cut | dd). To find the flag, we need both the UID and the flag file.

The hard work of finding the UID was done by j3remy. He followed the ACR122U API guide and traced the calls/responses. For example, frame.number==1954 reads Data: ff 00 00 00 02 d4 02, or get (ff) the firmware (2d d4). The response comes in frame.number==1961 Data: 61 08, 8 bytes coming with the firmware. Then frame.number==1966, Data: ff c0 00 00 00 08, get (ff) read (c0) the 8 bytes (08). And the card reader returns the firmware d5 03 32 01 06 07 90 00 in frame.number==1973. j3remy likewise parsed the communications and found frame.number==3427 which reads: d54b010100440007049748ba3423809000

d5 4b == pre-amble
01 == number of tag found
01 == target number
00 44 == SNES_RES
07 == Length of UID
04 97 48 ba 34 23 80 == UID
90 00 == Operation finished

The next step was to properly format the UID as the nfc-list command would display it. This took some doing. Effectively, there are 4 blank spaces before ATQA, 7 blank spaces before UID, and 6 spaces before SAK. There is one space after : and before the hexadecimal value. Each hexadecimal value is double-spaced. With that in mind, we created an nfc-list.txt file:

    ATQA (SENS_RES): 00  44
       UID (NFCID1): 04  97  48  ba  34  23  80
      SAK (SEL_RES): 00

Determining the spacing took some time. Once we had it, we could run the cat | grep | dd command and correctly return 26 bytes of ASCII characters.

$ echo "`cat nfc-list.txt | grep UID | cut -d  " " -f 10-| dd bs=13 count=2`"
2+0 records in
2+0 records out
26 bytes (26 B) copied, 6.2772e-05 s, 414 kB/s
04  97  48  ba  34  23  80

To recap: we have the UID, we have correctly converted the UID to an AES 128-bit decryption key, and we are now ready to decrypt the file. How to find the file, though? Rather than reverse engineering FAT16 over USB, we took a brute force approach. We exported all USB packets with data into files named flagnnnn (where nnnn was the frame.number). We then ran the following script:

for f in flag*
do
    echo -e "\n Processing $f file... \n"
    cat $f | aespipe -p 3 -d 3<<< "`cat nfc-list.txt | grep UID | cut -d ' ' -f 10-| dd bs=13 count=2`"
done

There it was. In the file flag1746, in the frame.number==1746, from the Casio camera (device 26.1) to the computer (host), we found a byte array that decrypted properly to:


What else, indeed? Well played.

Special thanks to Friedrich (aka airmack) and the 29c3 CTF organizers for an enjoyable challenge. Thanks to j3remy for captaining the #misec team and helping make sense of the ACR122 API. Helping us solve this were Philip Polstra and PhreakingGeek. It took a while, but we got it!


Security | Systems Engineering

Happy New Year 2013

By wolfgang. 1 January 2013 06:19

We did it. We beat the Mayans. Welcome to 2013.

Read less, do more. That is my New Year's Resolution. It might sound cynical or uninformed. After all, a good book can tell you a good deal about anything. Moreover, I have been and continue to be a proponent of continued learning. And yet I think it is time to put down the books and get to work.

There are many reasons.

The first reason is the wide gulf between reading about a thing and doing a thing. That first dawned on me while shivering in the mountains, wearing wet clothes and lacking sufficient food. Hey, I read about hiking! Why is this so hard? A more recent example was an OWASP hacker challenge that I completed on cross-site scripting. I read about cross-site scripting. I know this. It took me three hours. I mentioned it to the founder of OWASP Detroit who, after much prodding, revealed how long it took him. Five minutes. The difference between doing and reading is wide and deep.

The second reason is found in the old saying: writers write. They don't read books about writing. They don't attend workshops about writing. They don't talk about writing. You can readily identify a group of people in writing or any field who are procrastinating by reading, talking, planning, preparing. But not doing. Writers write. Coders code. Security professionals secure.

I have therefore queued up some exciting projects for this year. (Read that Wolfgang exciting, not normal exciting, which is an entirely different form of excitement.)

Professionally, my team and I are architecting and purchasing equipment for our third generation of private cloud computing. We are also revamping our business intelligence platform and adding self-service features. 

Personally, I have two development projects in the queue. I released #incog last year for covert channels and steganography. This year, I will release an update adding new channels and a PowerShell interface. I am also working on a hacker capture-the-flag toolset called Botori. I plan to release Botori mid-year along with several example CTF challenges.

Collaboratively, I have been invited to work on the PoshSec project. PoshSec is a PowerShell Information Security project started by Will Steele, who sadly passed away this past Christmas from terminal cancer. The project lead is Matt Johnson, and other members of the team include Rich Cassara. I look forward to working with these sharp people and contributing to Will Steele’s legacy.

As I said, I will be doing more in 2013. There is lots to do and little time. But before wrapping up this article, let’s take a look back.

2012: A Year in Review

  • This blog celebrated its tenth anniversary. The website saw its highest readership to date: 35,361 unique visitors and 46,853 page views in 2012. 
  • I did two case studies: a Microsoft case study on my firm’s second generation private cloud, and another case study on our new reporting SaaS.
  • I was mentioned in the press a few times on topics like cloud computing, risk management, and DevOps. 
  • I spoke at a few different conferences and user groups on topics like -- you guessed it! -- cloud computing, risk management, and DevOps. I also did a handful of talks on covert channels and steganography.
  • I volunteered for BSides Detroit and collaborated on everything from sponsors to speakers, as well as recording a 23-episode podcast series for the conference.
  • I was recognized with an InfoWorld 2012 Technology Leadership Award for my firm's private cloud and DevOps initiatives.
  • And I read a lot of books.

Done. Now, onward!


