J Wolfgang Goerlich's thoughts on Information Security
Converge Detroit Podcasts

By wolfgang. 21 July 2015 15:43

We did a few podcasts during the Converge Detroit conference. Check them out here:

IT in the D -- Live Broadcast: Converge 2015 Security Conference. Ever had a conversation with a guy who compromised bank security ... in Beirut? How about someone who’s managed to compromise physical security all over the world ... just because scanning and getting into servers is too boringly easy? Know anything about a group that’s out there dedicated to teaching kids about computer security in a way they’ll actually want to learn? Read and listen on, friends ... read and listen on.

Hurricane Labs InfoSec Podcast -- Don’t Bother Trusting, Verify Everything. This podcast was recorded by the Hurricane Labs crew, and special guest Wolfgang Goerlich, at the 2015 Converge Conference. Topics of discussion (and witty banter) include: FBI anti-encryption rhetoric; the Hacking Team hack; Google's social responsibility; and more. Converge and BSides Detroit were fantastic - if you didn't get the chance to make it out this year, you can still view the video presentation recordings here: Converge 2015 Videos. Thanks to Wolf and all the sponsors, volunteers, speakers and everyone who made these conferences possible! 

PVCSec -- Live! At Converge Detroit. Ed & I enjoyed talking with a fantastic audience at Converge Detroit 2015 yesterday. Everyone was in fine voice. Ed & Paul embraced Converge Detroit’s invitation to podcast LIVE! from the event on the campus of Wayne State University in the Arsenal of Democracy, Detroit Michigan U.S. of A. Thanks again to the event, the sponsors, the volunteers, and of course all of those who attended. We had a blast and can’t wait for next year!

Tags:

Out and About | Security

Securing The Development Lifecycle

By wolfgang. 6 March 2015 10:31

One line. Ever since the Blaster worm snaked across the Internet, the security community has known that it takes but one line of vulnerable code. Heartbleed and iOS Goto Fail made the point again last year. Both were one-line mistakes. Even the Bash Shellshock vulnerability was made possible by a small number of lines of code.

To manage the risk of code-level vulnerabilities, many organizations have implemented security testing in their software development lifecycle. Such testing has touch-points in the implementation, verification, and maintenance phases. For example, an organization might ...

Read the rest at http://content.cbihome.com/blog/securing-development-lifecycle

Tags:

Application Security | Security

Attacking hypervisors without exploits

By wolfgang. 3 January 2014 16:58

The OpenSSL website was defaced this past Sunday. (Click here to see a screenshot from @DaveAtErrata on Twitter.) On Wednesday, OpenSSL released an announcement that read: "Initial investigations show that the attack was made via hypervisor through the hosting provider and not via any vulnerability in the OS configuration." The announcement led to speculation that a hypervisor software exploit was being used in the wild.

Exploiting hypervisors, the foundation of infrastructure cloud computing, would be a big deal. To date, most attacks in the public cloud are pretty much the same as those in the traditional data center. People make the same sort of mistakes and missteps, regardless of hosting environment. A good place to study this is the Alert Logic State of Cloud Security Report, which concludes "It’s not that the cloud is inherently secure or insecure. It’s really about the quality of management applied to any IT environment."

Some quick checking showed OpenSSL to be hosted by SpaceNet AG, which runs VMware vCloud off of HP Virtual Connect with NetApp and Hitachi storage. It was not long before VMware issued a clarification.

VMware: "We have no reason to believe that the OpenSSL website defacement is a result of a security vulnerability in any VMware products and that the defacement is a result of an operational security error.” OpenSSL then clarified: "Our investigation found that the attack was made through insecure passwords at the hosting provider, leading to control of the hypervisor management console, which then was used to manipulate our virtual server."

No hypervisor exploit, no big deal. Right? Wrong.

Our security controls are built around owning the operating system and hardware.  See, for example, the classic 10 Immutable Laws of Security. "Law #2: If a bad guy can alter the operating system on your computer, it's not your computer anymore. Law #3: If a bad guy has unrestricted physical access to your computer, it's not your computer anymore." Hypervisor access lets the bad guy do both. It was just one wrong password choice. It was just one wrong networking choice for the management console. But it was game over for OpenSSL, and potentially any other customer hosted on that vCloud.

It does not take a software exploit to lead to a breach. Moreover, the absence of exploits is not the absence of lessons to be learned. Josh Little (@zombietango), a pentester I work with, has long said "exploits are for amateurs". When Josh carried out an assignment at a VMware shop recently, he used a situation very much like the one at SpaceNet AG: he hopped onto the hypervisor management console. The point is to get in quickly, quietly, and easily. The technique is about finding the path of least resistance.

Leveraging architectural decisions and administrative sloppiness is a valid attack technique. Scale and automation are what change with cloud computing. It is this change that magnifies otherwise small mistakes by IT operations and makes compromises like the OpenSSL defacement possible. Low-quality IT management becomes even worse.

And cloud computing's magnification effect on security is a big deal.

Tags:

Security | Virtualization

Why You Should Work in Information Security

By wolfgang. 13 November 2013 08:19

Rasmussen College reached out for advice on why information security is a great field to be in. My response is below. Click through to read more thoughts.

Expert Advice on Why You Should Work in Information Security ... NOW
http://www.rasmussen.edu/degrees/technology/blog/expert-advice-why-work-in-information-security/

1. Working in information security is exciting, challenging and never-ending

"Information security is new unexplored territory ... and this creates exciting and challenging work," says J. Wolfgang Goerlich, vice president of consulting at VioPoint.

Information security professionals work on teams to develop tactics that help detect and respond to unauthorized access and potential data breaches. A crucial part of the job in information security is keeping companies from having to deal with unwanted exposure.

The best information security teams, Goerlich says, are those that provide "consistent mentoring and cross-training." He says professionals in this field must be constantly learning and sharing what they know.

"As the technology is shifting and the attacks are morphing, the career effectively is one of life-long learning," Goerlich says.

Tags:

Security

Building a better InfoSec community

By wolfgang. 15 July 2013 16:56

How can we build a stronger community of speakers and leaders? I have a few thoughts. In some ways, this is a response to Jericho’s Building a better InfoSec conference post. I disagree with a couple of Jericho’s points. To be fair, he brings more experience in both attending conferences and reviewing CFPs. For that reason and others, I have a slightly different perspective.

Engagement should be encouraged, heckling discouraged. Hecklers and those looking to one-up the speaker should be run out of the room. But engagement, engagement is something different: sharing complementary knowledge and contributing ideas. Engagement is about raising everyone in the talk.

At the BSides Detroit conference, during OWASP Detroit meetings, and during MiSec talks, we get a lot of engagement. Rare is the speaker who goes ten or fifteen minutes without being interrupted. It is a good thing. If the audience has something to add, let’s get it into the discussion. If the speaker says something incorrect, let’s address it right off. In fact, many talks directly solicit feedback and ideas from the audience. Engagement, to me, is key to building a stronger local community.

Participation should be encouraged, waiting for rockstar status discouraged. I have seen people sit on the sidelines waiting until they had just enough experience, just enough content, just enough mojo to justify being a speaker. The only justification a community needs to accept a speaker is that the speaker is committed to putting the time into giving a great talk.

At local events, we have a mixed audience. I believe that every one of us has a unique perspective, a unique skillset, and unique knowledge. True, a pen-tester with 20 years of experience might not learn anything from someone with only a few years. Yet not all of our audience are pen-testers. It is the commitment to put together a good talk, practice it, research past talks of a similar nature, and solicit feedback that marks someone as a good presenter.

Let me give an example. At last week’s MiSec meeting, Nick Jacob presented on PoshSec. Nick (@mortiousprime) is interning with me this summer and has a total of ten weeks of paid InfoSec experience under his belt. Don’t get me wrong. Nick comes from EMU’s program and has done a lot of side work. But a 20-year veteran, Nick is not.

Nick’s talk was on applying PowerShell to the SANS Critical Security Controls. He structured his talk with engagement in mind. He covered a control and associated scripts for five or ten minutes, and then turned it over to the audience for feedback. What would the pen-testers in the room do to bypass these controls? What would the defenders do to counter the attacks? All in all, the presentation went over well and everyone left with new information and ideas. That is how to do it.

In sum, the better InfoSec communities remove the concerns speakers have about being heckled and being inadequate. A better community stresses engagement and participation. Such communities do so in ways that open up new opportunities for new members while strengthening the knowledge among those who have been in the profession a long time.

That is the trick to building a better InfoSec community.

Tags:

Security

Incident Management in PowerShell: Containment

By wolfgang. 12 June 2013 11:00

Welcome to part three of our Incident Management series. On Monday, we reviewed preparing for incidents. Yesterday, we reviewed identifying indicators of compromise. Today’s article will cover containing the breach.

The PoshSec Steele release (0.1) is available for download on GitHub.

At this stage in the security incident, we have verified that a security breach is underway. We did this by noting changes in the state and behavior of the system. Perhaps group memberships have changed, suspicious software has been installed, or unrecognized services are now listening on new ports. Fortunately, during the preparation phase we integrated the system into our Disaster Recovery plan.

Containment
There are two concepts behind successful containment. First, use a measured response in order to minimize the impact on the organization. Second, leverage the disaster recovery program and execute the runbook to maintain services.

When a breach is identified, kill all services and processes that are not in the baseline (Stop-Process). Oftentimes attackers have employed persistence techniques, so we must set up the computer to prevent new processes from spawning (see @obscuresec’s Invoke-ProcessLock script). This stops the breach in progress and prevents the attacker from continuing on this machine.
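
Here is a minimal sketch of that first step, assuming a process-name baseline was captured during preparation (the baseline path is illustrative):

    # Baseline was exported earlier with:
    #   Get-Process | Select-Object -ExpandProperty Name -Unique | Export-Clixml C:\Baselines\processes.xml
    $baseline = Import-Clixml -Path C:\Baselines\processes.xml
    $current  = Get-Process | Select-Object -ExpandProperty Name -Unique
    # Anything running now that is not in the baseline is a candidate for termination.
    Compare-Object -ReferenceObject $baseline -DifferenceObject $current |
        Where-Object { $_.SideIndicator -eq '=>' } |
        ForEach-Object { Stop-Process -Name $_.InputObject -Force -WhatIf }
    # Remove -WhatIf to stop the processes rather than preview the action.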

We now need to execute a disaster recovery runbook to resume services. Data files can be moved to a backup server using file replication services (New-DfsnFolderTarget). Services and software can be moved by replaying the build scripts on the backup server. The success metric here is minimizing downtime and data loss, thereby minimizing and potentially avoiding any business impact.
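
As a sketch, redirecting a DFS namespace folder to the backup server could look like the following (the namespace and server names are made up, and the DFSN cmdlets assume Windows Server 2012 or later):

    Import-Module DFSN
    # Take the compromised server's folder target offline ...
    Set-DfsnFolderTarget -Path '\\corp.example.com\data\finance' -TargetPath '\\compromised-srv\finance' -State Offline
    # ... and serve the replicated copy from the backup server.
    New-DfsnFolderTarget -Path '\\corp.example.com\data\finance' -TargetPath '\\backup-srv\finance' -State Online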

We can now move on to the network layer. If necessary, QoS and other NAC services can be set during the initial transfer. We then can move the compromised system onto a quarantine network. This VLAN should contain systems with the forensics and imaging tools necessary for the recovery process.

The switch commands for QoS, NAC, and VLAN vary by manufacturer. It is a good idea to determine what these commands are and how to execute them. A better idea is to automate these with PowerShell, leveraging the .Net Framework and libraries like SSH.Net and SharpSSH.
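
A rough sketch with SSH.Net follows; the assembly path, address, and credentials are placeholders, and the command syntax shown is illustrative. Note that multi-step configuration changes typically require an interactive shell stream rather than one-off commands.

    # Load the SSH.Net assembly (path is an assumption).
    Add-Type -Path 'C:\Libraries\Renci.SshNet.dll'
    $ssh = New-Object Renci.SshNet.SshClient('10.1.1.2', 'admin', 'SwitchPassword')
    $ssh.Connect()
    # One-off command; the exact syntax varies by switch manufacturer.
    $result = $ssh.RunCommand('show vlan brief')
    $result.Result
    $ssh.Disconnect()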

For more information about the network side of incident containment, please see Mick Douglas’s talk: Automating Incident Response. The concepts Mick discusses can be executed manually, automated with switch scripts, or automated with PowerShell and SSH libraries.

To summarize Containment, we respond in a measured way based on the value the system delivers to the organization. Containment begins with disaster recovery: fail over the services and data and minimize the business impact. We can then move the affected system to a quarantine network, and move on to the next stage: Eradication. The value PowerShell delivers is in automating the Containment process. When minutes count and time is expensive, automation lowers the impact of a breach.

This article series is cross-posted on the PoshSec blog.

Tags:

Incident Response | Security

Incident Management in PowerShell: Preparation

By wolfgang. 10 June 2013 11:30

We released PoshSec last Friday at BSides Detroit. We have named v0.1 the Steele release in honor of Will Steele. Will recognized PowerShell’s potential for improving an organization’s security posture early on. Last year, Matt Johnson -- founder of the Michigan PowerShell User Group -- joined Will and launched the PoshSec project. Sadly, Will passed away on Christmas Eve of 2012. A number of us have picked up the banner.

The Steele release team was led by Matt Johnson and included Rich Cassara (@rjcassara), Nick Jacob (@mortiousprime), Michael Ortega (@securitymoey), and J Wolfgang Goerlich (@jwgoerlich). You can download the code from GitHub. In memory of Will Steele.

This is the first of a five-part series exploring PowerShell as it applies to Incident Management.

So what is Incident Management? Incident Management is a practice comprising six stages. We prepare for the incident with automation and application of controls. We identify when an incident occurs. Believe it or not, this is where most organizations fall down. If you look at the Verizon Data Breach Investigations Report, companies can go weeks, months, sometimes even years before they identify that a breach has occurred. So we prepare for it, we identify it when it happens, we contain it so that it doesn’t spread to other systems, and then we clean up and recover. Finally, we figure out what happened and apply the lessons learned to reduce the risk of a recurrence.

Formally, IM consists of the following stages: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. We will explore these stages this week and examine the role PowerShell plays in each.

Preparation
The key practice in the Preparation stage is leveraging the time that you have on a project, before the system goes live. If time is money, the preparation time is the cheapest time.

Our most expensive time is later on, in the middle of a breach, or in a disaster recovery scenario. The server is in operation, the workflow is going on, and we are breaking the business by having that server asset unavailable. There is a material impact to the organization. It is very visible, from our management up to the CEO level. Downtime is our most expensive time.

The objective in Preparation is to bank as much time as possible. We want to ensure, therefore, that extra time is allocated during pre-launch for automating the system build, hardening the system, and implementing security controls. Then, when an incident does occur, we can identify and recover quickly.

System build is where PowerShell shines the brightest. As the DevOps saying goes, infrastructure is code. PowerShell was conceived as a task framework and admin automation tool, and it can be used to script the entire Windows server build process. Take the time to automate the process and, once done, place the build scripts in a CVS (code versioning software) to track changes. When an incident occurs, we can then pull on these scripts to reduce our time to recover.
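
As an illustration, a fragment of a build script might read like this (the feature names and paths are placeholders, using the ServerManager module of the day):

    Import-Module ServerManager
    # Install the roles this server needs; the list comes from the build document.
    Add-WindowsFeature Web-Server, Web-Mgmt-Console
    # Lay down the application directory structure.
    New-Item -Path D:\Apps\Intranet -ItemType Directory -Force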

Once built, we can harden the system to increase the time it will take an attacker to breach our defenses. CIS Security Benchmarks (Center for Internet Security) provides guidance on settings and configurations. As with the build, the focus is on scripting each step in hardening. And again, we will want to store these scripts in a CVS for ready replay during an incident.
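
For instance, a hardening script might capture benchmark items like these (the settings shown are illustrative; follow the benchmark for your OS version):

    # Disable a service the benchmark flags as unnecessary.
    Set-Service -Name RemoteRegistry -StartupType Disabled
    Stop-Service -Name RemoteRegistry -ErrorAction SilentlyContinue
    # Require SMB signing on the server side.
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' -Name RequireSecuritySignature -Value 1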

Finally, we implement security controls to detect and correct changes that may be indicators of compromise. For a breakdown of the desired controls, we can follow the CSIS 20 Critical Security Controls matrix. The Steele release of PoshSec automates (1) Inventory of Authorized and Unauthorized Devices; (2) Inventory of Authorized and Unauthorized Software; (11) Limitation and Control of Network Ports, Protocols, and Services; (12) Controlled Use of Administrative Privileges; and (16) Account Monitoring and Control.
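
To give a flavor of control (16), here is one way to snapshot local Administrators group membership using the ADSI idiom of the day (the output path is an assumption):

    # Enumerate members of the local Administrators group via ADSI.
    $group = [ADSI]'WinNT://./Administrators,group'
    $members = @($group.psbase.Invoke('Members')) | ForEach-Object {
        $_.GetType().InvokeMember('Name', 'GetProperty', $null, $_, $null)
    }
    $members | Export-Clixml -Path C:\Baselines\administrators.xml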

The bottom line is we baseline areas of the system that attackers will change, store those baselines as XML files in a CVS, and check regularly for changes against the baseline. We use the Export-Clixml and Compare-Object cmdlets to simplify the process.
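
Put together, a minimal sketch of the baseline-and-compare loop for Windows services (the baseline path is illustrative):

    $baselinePath = 'C:\Baselines\services.xml'
    # Capture the baseline once the system is built and hardened.
    Get-Service | Select-Object -ExpandProperty Name | Export-Clixml -Path $baselinePath
    # Later, on a schedule: any service flagged '=>' is new since the baseline.
    Compare-Object -ReferenceObject (Import-Clixml -Path $baselinePath) -DifferenceObject (Get-Service | Select-Object -ExpandProperty Name)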

At this point in the process, we are treating our systems like code. The setup and securing is completed using PowerShell scripts. The final state is baselined. The baselines, along with the scripts, are stored in a CVS. We are now prepared for a security incident to occur.

Next step: Identification
Tomorrow, we will cover the Identification stage. What happens when something changes against the baseline? Say, a new user with a suspicious name added to a privileged group. Maybe it is a new Windows service that is looking up suspect domain names and sending traffic out over the Internet. Whatever it is, we have a change. That change is an indicator of compromise. Tomorrow, we will review finding and responding to IOCs.

This article series is cross-posted on the PoshSec blog.

Tags:

Incident Response | Security

Software vulnerability lifecycle

By wolfgang. 29 April 2013 12:40

How long does it take to go from excitement to panic? Put differently, how long is the vulnerability lifecycle?

We know the hardware side of the story. Moore's law predicts that transistor density doubles every 18 months. On the street, factoring in leasing, this means computing power jumps up every 36 months.

Now let's cover the software side of the story. It takes a couple of years for software ideas to be developed and to reach critical mass. We see a 24-month development cycle. Add another 6-12 months for the software to become prevalent and investigated by hackers, both ethical and not.

I made a prediction this past weekend. Some at BSides Chicago were calling this Wolf's Law. Not me. I checked the video replay. Nope. It is simply a hunch I have. Start the clock when developers get really excited about software, tools, or techniques. Stop the clock when a hacker presents an attack at a well-known conference.

Wolf's Hunch says it takes 36 months to go from excitement to panic.

As a security industry, the trick is to get ahead of the process. How could we engage the developers at months 1-12? One way might be to attend dev conferences. Here is how I put it at BSides Chicago:

"You know what is scary? Right now, as we are all in here talking, there is a software developer conference going on. Right now. There are a whole bunch of software developer guys talking about the next biggest thing. 36 months from now, what the developers are really excited about, we will be panicking about."

I checked the news this morning. During this past weekend, NY Disrupt was in full swing. At approximately the time I was speaking, developers were hard at it in the Hackathon. Lots of people are excited about the results, such as Jarvis:

"Jarvis works, using APIs provided by Twilio, Weather Underground and Ninja Blocks to help you control your home and check the current conditions, headlines and what's making news, and more, all just by dialing a number from any telephone and issuing voice commands, It's like a Siri, but housed on Windows Azure and able to plug into a lot more functionality.”

Uh huh. A Jarvis. Voice control. Public APIs. What could possibly go wrong?

Will my hunch play out? Check back here in May 2016. My money is on a story about a rising infosec star who is demonstrating how home APIs can be misused.

Tags:

Security

Out and About: Great Lakes InfraGard Conference

By wolfgang. 25 February 2013 15:30

I am presenting a breakout session at this year's Great Lakes InfraGard Conference. Hope to see you there.

Securing Financial Services Data Across The Cloud: A Case Study

We came from stock tickers, paper orders, armored vehicles, and guarded vaults. We moved to data bursts, virtual private networks, and protocols like Financial Information eXchange (FIX). While our objective remains the same, to protect the organization and its financial transactions, our methods and technologies have radically shifted. Looking back is not going to protect us.

This session presents a case study on a financial services firm that modernized its secure data exchange. The story begins with the environment that was developed in the previous decade. We will then look at high-level threat modeling and architectural decisions. A security-focused architecture works at several layers, and this talk will explore them in depth, including Internet connections, firewalls, perimeters, hardened operating systems, encryption, data integration, and data warehousing. The case study concludes with how the firm transformed the infrastructure, layer by layer, protocol by protocol, until we were left with a modern, efficient, and security-focused architecture. After all, nostalgia has no place in financial services data security.

Tags:

Out and About | Security

Surviving the Robot Apocalypse

By wolfgang. 28 January 2013 11:14

I am on the latest BSides Chicago podcast episode: The Taz, the Wolf, and the exclusives. Security Moey interviewed me about a new talk I am developing for Chicago, titled Surviving the Robot Apocalypse.

The inspiration comes from Twitter. Tazdrumm3r once said, "@jwgoerlich <~~ My theory, he’s a Terminator from the future programmed 4 good 2 rally & lead the fight against SkyNet, MI branch." A few weeks back, we were discussing the robotics articles I wrote for Servo magazine and some 'bots I built with my son. To which Infosec_Rogue said, "@jwgoerlich not only welcomes our robot overlords, he helped create them."

I can roll with that. Let's do it.

The goal of this session is to cover software security and software vulnerabilities in an enjoyable way. Think Naked Boulder Rolling and Risk Management. Unless, of course, you didn't enjoy Naked Boulder Rolling. In that case, imagine some other talk I gave that you enjoyed. Or some other talk someone else gave that you enjoyed. Yeah. Pick one. Got it? Surviving is like your favorite talk, only for software security principles and their applicability to InfoSec.

I hope to see you in Chicago.

Surviving the Robot Apocalypse

Abstract. The robots are coming to kill us all. That, or the zombies. One way or the other, humanity stands on the brink. While many talks have focused on surviving the zombie apocalypse, few have given us insights into how to handle the killer robots. This talk seeks to fill that void. By exploring software security flaws and vulnerabilities, we will learn ways to bypass access controls, extract valuable information, and cheat death. Should the unthinkable happen and the apocalypse not come, the information learned in this session can also be applied to protecting less-than-lethal software. At the end of the day, survival is all about the software.

Tags:

Security
