I have the great opportunity to spend time with CSOs and IT executives to understand their cybersecurity concerns and help them map out a strategy for success. An increasingly common question I’ve been hearing is, “Does my organization need a threat intelligence team?” A dedicated threat intelligence capability can be valuable: it can hunt for advanced attacks; profile never-before-seen malware, campaigns or adversaries; and really think like an attacker. However, the number of organizations with their own dedicated threat intelligence team is quite low today, and there are some very good reasons behind this trend.

In-house threat intelligence teams are rare largely because of the difficulty and cost of identifying and hiring qualified staff. In the grand scheme of things, cybersecurity is a relatively young industry, and the number of highly technical threat analysts is still low. Open security jobs far outnumber qualified candidates, something many of you experience on a daily basis when trying to fill your own positions. The pipeline is part of the problem: most universities don’t offer a cybersecurity major, and many people currently studying computer science aren’t aware of the opportunity in front of them.

Today’s threat intelligence analysts learned what they know through hands-on work in related computing fields and/or years of experience on the IT frontlines. With threat intelligence analysts in short supply, the demand for their services keeps their salaries high and beyond the budgets of all but the largest organizations.

So my answer to the threat intelligence team question mentioned above usually consists of several more questions: What is your organization’s current security posture? Are you automatically preventing attacks before they can breach your network? Do you have an information security team, and do they have a proven workflow in place for handling a successful cyberattack? How are you protecting your organization’s intellectual property and high-value assets? Is your network properly segmented? If the answer to any of those questions is “no,” my advice to the customer is to get those issues addressed first, before they even begin to ponder the need for a dedicated threat intelligence team.

This isn’t to say that an organization doesn’t need threat intelligence; good intelligence plays an important role in defending against attacks. But for many organizations, the best way to get value from threat intelligence is by ensuring their security platforms can natively consume and enforce protections derived from it. When you exist in a world where attacks are generated at machine scale, you must ensure you can automate as much of the creation, sharing, ingestion and application of threat intelligence as possible. The desired end state is preventing the majority of attacks, identifying targeted threats, and ensuring your security staff has easy access to the intelligence and context to prioritize the most critical attacks for immediate action. Inherent in this is the belief that more data doesn’t always yield better security: you need the right intelligence, delivered in a simple way.
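To make the automation point concrete, here is a minimal sketch of automated feed ingestion. The feed format and field names below are hypothetical, invented for illustration; real deployments would consume standardized feeds (such as STIX/TAXII) through the security platform itself:

```python
# Minimal sketch: ingest a (hypothetical) threat intel feed and keep only
# high-confidence network indicators that a platform could enforce.
import json

def ingest_feed(feed_json, min_confidence=70):
    """Parse a feed and keep only high-confidence network indicators."""
    indicators = json.loads(feed_json)
    blocklist = set()
    for ind in indicators:
        if ind.get("type") in ("ip", "domain") and ind.get("confidence", 0) >= min_confidence:
            blocklist.add(ind["value"])
    return blocklist

feed = json.dumps([
    {"type": "ip", "value": "203.0.113.7", "confidence": 90},
    {"type": "domain", "value": "bad.example", "confidence": 40},  # below threshold, dropped
    {"type": "hash", "value": "abc123", "confidence": 95},         # not a network indicator
])
print(sorted(ingest_feed(feed)))  # -> ['203.0.113.7']
```

The point of the confidence threshold is the same as in the text: more data doesn’t always yield better security, so only actionable, trustworthy indicators should reach enforcement.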

Once you have established a good baseline for your security posture, I would advise you to start considering how to build a threat intelligence team now. It will take time to identify the right people and secure the support you need to build the team. Think about the following guidelines as you move down this path:

Support From the C-Suite

The cost involved in building a threat intelligence team is so great that most boards of directors will need assurances that the work being done is truly necessary. Any CSO considering building a threat intelligence team should make sure they can translate the benefits of the team’s research in a way that clearly communicates its value to the board. For instance, you want to report on the threats targeting your organization and industry, and draw the line from highly technical indicators of compromise to business metrics. If the board can’t see how the absence of a threat intelligence team affects the bottom line, they’re less likely to see one as worth the cost.

Cybersecurity and Threat Intelligence Are Different Disciplines

Don’t expect to plug a cybersecurity specialist into the role of threat intelligence analyst, as the jobs require different skill sets. An example I use to illustrate the difference is scientists and engineers. Scientists, like threat intelligence analysts, spend much of their time researching a subject over time to learn its behavior, motivation and technique. They then publish their findings so others can apply that research in a practical way. Engineers, like cybersecurity specialists, apply the knowledge gained by scientists to the real world by building machines or writing code to produce the desired effect and then maintaining that machine or code over time. Be aware of the difference when staffing up your threat intel team. Not everyone in cybersecurity is meant to be a threat analyst and vice versa.

Good Intel Is Hard to Find

This is a topic I’ve addressed before: there are a lot of different threat intelligence feeds available today, and each claims to provide the best, most comprehensive intel on the latest cyberthreats. To make sure they don’t miss hearing about the latest threat, threat intelligence teams will subscribe to multiple intelligence feeds. But in the intelligence game, it’s quality, not quantity, that counts. The value of any threat intel is in its applicability to your network. For example, if your organization is responsible for cybersecurity at a large manufacturing facility, you should concentrate your threat intelligence spend on feeds that specifically track manufacturing cyberthreats. This will allow you to focus on the threats most likely to impact the organization, and it will free up the budget spent on unnecessary feeds for better use elsewhere.
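As a rough illustration of quality over quantity, a team could filter indicators by relevance to its own sector before acting on them. The sector tags and schema here are invented for the example, not a real feed format:

```python
# Illustrative sketch: keep only indicators tagged for the sectors we
# actually defend. Tags and structure are made up for this example.
def relevant_indicators(indicators, my_sectors):
    """Return indicators whose sector tags overlap with our own sectors."""
    my_sectors = set(my_sectors)
    return [i for i in indicators if my_sectors & set(i.get("sectors", []))]

feed = [
    {"value": "198.51.100.2", "sectors": ["manufacturing", "energy"]},
    {"value": "192.0.2.9",    "sectors": ["retail"]},
    {"value": "203.0.113.5",  "sectors": []},  # untagged: dropped
]
print(relevant_indicators(feed, ["manufacturing"]))
```

A manufacturing-focused team would act on the first indicator only, freeing attention and budget from feeds that track other industries.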


Scott Simkin is a Senior Manager in the Cybersecurity group at Palo Alto Networks. He has broad experience across threat research, cloud-based security solutions, and advanced anti-malware products. He is a seasoned speaker on an extensive range of topics, including Advanced Persistent Threats (APTs), presenting at the RSA conference, among others. Prior to joining Palo Alto Networks, Scott spent 5 years at Cisco where he led the creation of the 2013 Annual Security Report amongst other activities in network security and enterprise mobility. Scott is a graduate of the Leavey School of Business at Santa Clara University.

SecurityWeek RSS Feed

Back in January, I wrote one of my most popular posts ever: “Why you don’t need an RFID-blocking wallet.” As the title suggests, I argued that it’s a waste of money to buy a wallet with special shielding to protect your chipped credit card from RFID scanners wielded by street criminals seeking to snatch your credit card number.

Since then, in true internet tradition, I’ve been called an idiot by dozens of people and received emails from RFID vendors saying I’m a disgrace—the latter begging me to tell people they also need a Faraday bag for their cellphones. (Tip: If you don’t want anyone tracking you via GPS, turn off your cellphone’s GPS feature.) I’ve also been emailed by people who are 100 percent sure, without any real evidence, that they were the victims of RFID-scanning criminals.


Part of the confusion stems from the fact that many, if not most, people now have chip-and-pin cards—you can see the shiny chip right on the card, which you stick into a card reader (instead of sliding the card through). People assume chip-and-pin cards are vulnerable to scanning, but they’re not. RFID cards are contactless—and very likely you don’t have one.

Still waiting

Every story about the risks of RFID scanners features a white hat hacker showing it can be done, but not a shred of evidence has emerged that bad guys are sitting on popular corners wirelessly stealing credit card numbers.

I still haven’t heard of a single case of real-life RFID scanning criminality. Even the wallet vendors’ websites have no verifiable links or testimonies from actual victims. To be honest, at this point, I’m surprised an RFID-protection vendor hasn’t paid a criminal to get caught, so they could point to a real-life story.

Plenty of “believers” have told me it’s obvious why the real RFID scanning criminals haven’t been caught yet—it’s a wireless crime. In their world, it’s impossible to catch wireless criminals. Never mind that we’ve been successfully tracking criminals wirelessly and prosecuting them for decades. If there were a huge contingent of RFID criminals, we would eventually catch some, and it would be such big news that it would spread like wildfire across the internet.

If someone stole a credit card number using an RFID scanner, created a counterfeit card, and got busted, as part of the plea agreement the accused would reveal exactly how the crime had been committed. This plea would have details about the scanner, the victims, and how much money had been stolen. That’s how our justice system works. Where are those stories?

Even the popular debunking website has commented on RFID crime, giving it a “Mixture” truth rating. Why “Mixture”? Because it can’t find any real evidence RFID theft is occurring, although it debunks at least one news source that claimed to show a real RFID criminal.

Make no mistake—criminals who want to make money know about this supposedly easy crime. Hacker researchers have been writing about the risks since RFID-enabled items first came out. Here’s an article from industry luminary Bruce Schneier from 2006.

Not cost efficient

Given all this, you might be surprised to learn I think that RFID-scanning criminals do exist. There are nearly 100 videos on the internet from all over the world showing good guy hackers demonstrating how it can be done. It’s a potential risk. But because the real-life occurrence is so rare, it’s a small risk.

Why? Because it’s not cost-efficient. Real-life criminals steal credit card numbers all the time, but they don’t sit on corners for hours hoping to catch a few dozen card numbers. They steal hundreds of thousands of cards and resell them cheaply to anyone who wants to buy them. In 10 minutes, any criminal with enough smarts to even know what RFID scanning is can spend $100 to buy 1,000 credit card numbers off the internet from any number of illegal dealers, with far less risk of being captured on a security camera.

Focus on real threats

I have no problem with someone buying an RFID-protecting wallet or a Faraday bag for a cellphone or car keys. We all make our own risk and buying decisions on a daily basis. I’m just saying that for most people it doesn’t make much sense.

We’re each hit by a myriad of risks every day. In the computer world alone, we get introduced to somewhere around 13 to 16 new individual security vulnerabilities every day, year after year. They never stop coming.

A prudent person looks at the various risks, weighs the likelihood and potential damage of each of them against the other, and picks those to spend time and money on.

I use the example of people who visit me in Key Largo: Almost all of my visitors worry about potential shark attacks when we go snorkeling and diving. Some are so terrified they won’t get in the water. I tell them there has never been a documented, unprovoked shark attack in the history of Key Largo (at least since the 1800s, if not earlier). The risk of shark attacks worldwide is something like one in 1 million (70 to 100 deaths among hundreds of millions of potential encounters). But the odds that those same people might be killed by driving their car to my house are about 1 in 12,300. As humans, we are terrible at ranking risks, even when told the true odds.

Where I was wrong

I have one update to the original post: I said most of the credit cards in the world don’t have RFID in them. That’s still true. But in some countries, like Canada and Poland, RFID-enabled credit cards are the norm. Even in those countries, I can’t find reports of real RFID-scanning criminals.

Of course, cases of RFID-scanning criminals caught by police may simply have not made it to the web yet—but you’d think that the dozens of vendors selling RFID-protecting wallets and purses would be linking to those stories like crazy. Guess what? They haven’t.

Still, if I haven’t convinced you, go ahead and buy that RFID-protecting wallet. It’s your money and your risk decision. Me, I’ll wait until I hear that RFID crime is on the rise—or better yet, until I have an RFID-enabled credit card. Friends who have shown me their RFID wallets did so because their new credit cards came with a chip, which they assumed was RFID in nature. It wasn’t. They were carrying the regular, nonwireless, chip-and-pin cards.


InfoWorld Security Adviser

We’re all familiar with the cartoon image of a character stopping a water leak by plugging a finger into the hole, only for another leak to start, needing another finger, and so on, until the character is soaked by a wave of water.

It’s a little like the current, fragmented state of mobile security – the range of threats is growing fast, outpacing current security measures. Also, the devices themselves have inherent vulnerabilities that can be exploited by resourceful attackers. So it’s no surprise that enterprises are struggling with the issue of mobile security.

Finding flaws and mRATs

The list of potential security challenges and vulnerabilities across Android and iOS devices is complex. It starts with the devices’ mobility: they move between public cellular networks, corporate networks, public hotspots and home internet connections, and back again. This makes them vulnerable to Man in the Middle (MitM) attacks via rogue cellular base stations, WiFi hotspots or compromised public networks, allowing attackers to track, intercept and eavesdrop on data traffic and even voice calls, using SS7 protocol exploits.

Then, the Android and iOS mobile operating systems themselves have been shown time and time again to be plagued with vulnerabilities that smart malicious hackers can exploit to their advantage. One major recent example is ‘Quadrooter’, a privilege escalation vulnerability shown to affect over 900 million Android devices. These vulnerabilities often have long patching cycles which can take months to roll out, leaving millions of devices vulnerable to remote attack.

Similarly, iOS has also recently been in the headlines after news broke that it had been compromised in the NSO hack. This affected all Apple devices, making iOS, the phone’s resources and any application running on it, including security apps such as anti-virus, vulnerable to attack. It’s worth highlighting that this wasn’t found by Apple or any detection applications; it came to light only because the attacker was negligent in concealing it.

Mobile remote access trojans (mRATs) give an attacker the ability to remotely access the resources and functions on Android or iOS devices, and stealthily exfiltrate data without the user being aware. mRATs are often embedded in supposedly benign apps available from app stores. Compromised or falsely certified apps are another security risk, as they can allow attackers to remotely take over devices and use the device’s resources undetected.

As a result, the mobile security industry is always playing catch-up. Zero-day attacks, where cybercriminals exploit inbuilt vulnerabilities on mobile operating systems that haven’t yet been patched or even identified, are a major ongoing problem.

Protection versus performance

Ultimately, there are three main threat vectors for mobile devices: targeting and intercepting the communications to and from devices; targeting the devices’ external interfaces (cellular, WiFi, Bluetooth, USB, NFC, web and so on) to penetrate the device and plant malicious code, virtually as well as physically; and targeting the data on the device and the resources and functions the underlying OS provides access to, such as the microphone, camera, GPS, storage and network connectivity.

While there is a wealth of technologies designed to help manage the security gaps on devices – from Enterprise Mobile Management to mobile anti-malware – these protections come at a price. First, a collection of multiple security tools and processes is a big drain on processing power, is complex to manage, and doesn’t really fix the underlying device and OS vulnerabilities. Second, the conventional approach to mobile security is based on locking down or denying features and functions, which creates further problems with end-user acceptance. It’s critical to balance security and usability: if protecting the device forces people to change the way they use it, they will find workarounds that undermine the security measures.

So if enterprises are to continue harnessing the benefits of mobile devices without compromising their performance and usability, then we need to rethink our approach to mobile security, from the ground up.

Secure foundations

This new approach starts with the foundations of the mobile device: the OS and firmware. The various software layers on devices have fundamental vulnerabilities which can be exploited, so they should be replaced with secure, hardened versions from which the flaws have been removed or patched, with advanced security layers put in place to manage and protect against the three threat vectors mentioned above. This means attackers cannot use their conventional techniques to target vulnerabilities, but the device still runs an OS the user is familiar with and retains access to the full app ecosystem, so usability is not affected or restricted.

This stronger foundation is then used to build a strong security architecture consisting of four layers that address the three main mobile threat vectors. The first is the Encryption Layer, in charge of encrypting all data stored on the phone, as well as all traffic to and from the device, securing all communications, whether voice, data or messaging, from network sniffing and man-in-the-middle attacks.

The second is the Protection Layer, securing the device’s externally available interfaces, from WiFi, cellular, USB, NFC and Bluetooth to web. These need protecting against threats using an embedded firewall to monitor and block all downloads and exploit attempts.

The third is the Prevention Layer, monitoring for unauthorized attempts to access operating system functions such as stored data, the microphone or camera, location technology and so on. These need their own specialist protective technologies.

The final layer is the Detection and Enforcement Layer, which monitors for, detects and blocks execution attempts by malicious code or misbehaving apps, in the same way that we currently monitor for device and network anomalies on corporate networks.
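Purely as an illustration, the four layers can be pictured as a chain of checks that every event on the device must pass. The layer names mirror the architecture above, but the logic below is a toy stand-in, not a real mobile security stack:

```python
# Toy composition of the four layers as a chain of checks over an "event".
# Each function is a stand-in for the real layer described in the text.
def encryption_layer(event):
    return event.get("encrypted", False)  # all data and traffic must be encrypted

def protection_layer(event):
    # only traffic on known external interfaces is accepted for inspection
    return event.get("interface") in {"wifi", "cellular", "usb", "nfc", "bluetooth", "web"}

def prevention_layer(event):
    # sensitive OS resources require explicit authorization
    return event.get("resource") not in {"microphone", "camera"} or event.get("authorized", False)

def detection_layer(event):
    return not event.get("known_malicious", False)

LAYERS = [encryption_layer, protection_layer, prevention_layer, detection_layer]

def allow(event):
    """An event is allowed only if it passes every layer."""
    return all(layer(event) for layer in LAYERS)

print(allow({"encrypted": True, "interface": "wifi", "resource": "storage"}))  # True
print(allow({"encrypted": True, "interface": "wifi", "resource": "camera"}))   # False
```

The design point is that the layers are independent: a failure in any one of them blocks the event, which is what lets the architecture cover all three threat vectors at once.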

In conclusion, mobile security is currently too fragmented, and the range of threats growing too fast for conventional protections. Instead of plugging leaks as they appear, we need to start again, from the foundations up – and fundamentally rethink the way in which we protect and secure mobile devices.

Help Net Security

Microsoft’s new server operating system is finally here, and we’ve prepared a list of the most important new features, including the ones you won’t find on other blogs.

The newest release of Microsoft’s server operating system, Windows Server 2016, hit general availability on September 26th, along with System Center 2016. We’ve been hearing about new and improved things coming in Windows Server 2016 for months, so you most probably know about the container support and the improved security and networking tools. Maybe you’ve even used some of them in the technology preview versions.

But in case you’ve been holding out for GA, or your working day of endless tickets simply doesn’t leave time to try out betas and technology previews, we’ve prepared a closer look at the top 10 features in Windows Server 2016 that every sysadmin needs to know about.

Nano Server, the next evolution of Server Core, is an even more slimmed-down version of Windows Server 2016. A Nano Server must be managed remotely and can only run 64-bit applications, but it can be optimized for minimal resources, requires far less patching, restarts very quickly, and can perform a number of specific tasks very well with minimal hardware.

Good uses for Nano Server include IIS, DNS, F&P, application servers, and compute nodes. So if you liked Server Core, you will love Nano; and if you never really understood Server Core, you should give Nano a chance, especially if patching and downtime are challenges in your 24×7 shop.

Windows Server 2016 comes with PowerShell 5.0, part of the Windows Management Framework 5.0. There are many improvements in PS5 (you’ll find a complete list in this blog post), including support for developing your own classes and a new module called PackageManagement, which lets you discover and install software packages from the Internet.

The Workflow debugger now supports command and tab completion, and you can debug nested workflow functions. To break into a running script, you can now press Ctrl+Break, in both local and remote sessions, and even in a workflow script. And PS5 now runs directly on Nano Server, so administration of this lightweight server platform is made even simpler.

Windows Server 2016 offers two kinds of containers to improve process isolation, performance, security, and scalability. Windows Server Containers can be used to isolate applications with a dedicated process and a namespace, while Hyper-V Containers appear to be entire machines optimized for the application.

Windows Server Containers share a kernel with the host, while Hyper-V Containers have their own kernel, and both enable you to get more out of your physical hardware investments. On top of this, Microsoft announced that all Windows Server 2016 customers will get the Commercially Supported Docker Engine for no additional cost, enabling applications delivered through Docker containers to run on Windows Server on-premise installations or in the cloud, on Azure.

WS2016 brings some huge improvements to Active Directory, security, and identity management, such as Privileged Access Management (PAM), which restricts privileged access within an existing Active Directory environment. In this model you have a bastion forest, sometimes called a red forest, where administrative accounts live and which can be heavily isolated to ensure it remains secure. Just-in-Time administration, privileged access request workflows, and improved auditing are all included, and best of all, you don’t have to replace all of your DCs to take advantage of this.

“Just Enough Administration” is a new capability in Windows Server 2016 that enables administrators to delegate anything that can be managed through PowerShell. Do you have a developer who needs to be able to bounce services or restart app pools on a server, but not log on or make any other changes? With JEA you can give him or her exactly those abilities, and nothing more. Of course, you may have to write some PS1s to let them actually do that, but the point is that now you can.

Customers who want to set up highly-available RDS environments, but not go to the trouble and expense of setting up HA SQL, can now use an Azure SQL DB for their Remote Desktop Connection Broker, making it both easier and less expensive to set up a resilient virtual desktop environment.

The RD Connection Broker can now handle massively concurrent connection situations, commonly known as the “log on storm”, and it has been tested to handle more than 10k concurrent connection requests without failures.

Software-defined storage enables you to create HA data storage infrastructures that can easily scale out, without breaking the bank. With software-defined storage, even SMBs can start to take advantage of high-availability storage within existing budgets.

Three new features take the stage: Storage Spaces Direct lets you combine commodity hardware with availability software to provide performant storage for virtual machines; Storage Replica replicates data at the volume level in either synchronous or asynchronous mode; and Storage QoS guards against poor performance in a multitenant environment.

If you have set up an NTP server on your network, or subscribed to NTP services from an NTP pool, you know how important accurate time can be. Typically, Windows environments were less worried about accurate time, and more concerned with a consensus of time, with a five-minute drift being acceptable.

Now in Windows Server 2016, the new time service can support accuracy of up to 1 ms, which should be enough to meet almost all needs – if you need more accuracy than that, you probably own your own atomic clock.

Immensely valuable in a virtualization environment, software-defined networking enables administrators to set up networking in their Hyper-V environment similar to what they can in Azure, including virtual LANs, routing, software firewalls, and more.

You can also do virtual routing and mirroring, so you can enable security devices to view traffic without expensive taps.

There are so many security improvements in Windows Server 2016 that we could do an entire post just on that, which, as a matter of fact, we will in the coming weeks. For now, be aware that WS2016 includes Credential Guard and Remote Credential Guard to protect user credentials, Code Integrity to protect the operating system, a whole host of improvements for virtual machines, new antimalware capabilities in Windows Defender, and much more.

As stated on the Windows Server team’s blog post announcing the new version, Windows Server 2016 is immediately available for evaluation, and will be available for purchase with the first October price list, while volume licensing customers will be able to download fully licensed software at General Availability in mid-October.

Watch out for new posts on this blog for more information on Windows Server 2016, as we will take a deeper dive into some of the most significant features for SMB organizations, as well as a much closer look at the security improvements in the next few weeks. You can subscribe here and get the new blog post announcements directly in your inbox.

Until then, please leave a comment below and let us know what feature you find most interesting or have been particularly looking forward to.


GFI Blog

Sysadmins and devs, fresh from a weekend spoiled by last week's OpenSSL emergency patch, have another emergency patch to install.

One of last week's fixes, for CVE-2016-6307, created CVE-2016-6309, a dangling pointer security vulnerability.

As the fresh advisory states: “The patch applied to address CVE-2016-6307 resulted in an issue where if a message larger than approx 16k is received, then the underlying buffer to store the incoming message is reallocated and moved.

“Unfortunately a dangling pointer to the old location is left, which results in an attempt to write to the previously freed location. This is likely to result in a crash, however it could potentially lead to execution of arbitrary code.”

OpenSSL 1.1.0 users need to install 1.1.0b.

That one, rated critical, was turned up by Robert Święcki of the Google Security Team.

In the other bug (CVE-2016-7052), OpenSSL 1.0.2i omitted a certificate revocation list (CRL) sanity check from 1.1.0, meaning “any attempt to use CRLs in OpenSSL 1.0.2i will crash with a null pointer exception.” Grab OpenSSL 1.0.2j to fix that one.
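If you are unsure which OpenSSL build your tooling is linked against, a quick check is possible from a language runtime. This Python sketch flags the vulnerable releases named in these advisories; matching on the version-string prefix is an assumption about OpenSSL’s usual banner format:

```python
# Check which OpenSSL the local Python runtime is linked against,
# useful when triaging advisories like the ones above.
import ssl

print(ssl.OPENSSL_VERSION)  # e.g. "OpenSSL 1.0.2j  26 Sep 2016"

# Releases called out above: 1.1.0 and 1.1.0a need 1.1.0b
# (CVE-2016-6307 / CVE-2016-6309), and 1.0.2i needs 1.0.2j (CVE-2016-7052).
VULNERABLE_PREFIXES = ("OpenSSL 1.1.0 ", "OpenSSL 1.1.0a", "OpenSSL 1.0.2i")

def needs_patch(version_string):
    """True if the banner names one of the releases the advisories flag."""
    return version_string.startswith(VULNERABLE_PREFIXES)
```

Note this only covers the runtime's linked library; other copies of OpenSSL on the system still need checking through your package manager.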

The latest patched code is available here or from your favorite operating system distribution. ®


The Register - Security

Writing secure applications doesn't mean simply checking the code you've written to make sure there are no logic errors or coding mistakes. Attackers are increasingly targeting vulnerabilities in third-party libraries as part of their attacks, so you have to check the safety of all the dependencies and components, too.

In manufacturing, companies create a bill of materials, listing in detail all the items included when building a product so that buyers know exactly what they're buying. Processed food packaging, for example, typically tells you what's inside so that you can make an informed buying decision.


When it comes to software, untangling the code to know what libraries are in use and which dependencies exist is hard. It's a challenge most IT teams don't have the time or resources to unravel.

"You don't want to purchase spoiled food, buy a car with defective air bags, or have a relative receive a defective pacemaker," says Derek Weeks, vice president and devops advocate at Sonatype, a software supply chain automation provider. Yet, surprisingly, we don't demand the same of software.

Tell me what's inside

At the very least, a software bill of materials should describe the components included in the application, the version and build of the components in use, and the license types for each component.

To take one example, IT administrators would have had a far easier time back in April 2014 when the Heartbleed vulnerability was initially disclosed if they'd had a bill of materials on hand for every application running in their environment. Instead of testing every application to determine whether OpenSSL was included, IT could have checked the list and known right away which ones depended on the vulnerable version and needed action.

Other nice-to-have information would be details like the location within the source code where that component is being called, the list of all tools used to build the application, and relevant build scripts.

Today's developers rely heavily on open source and other third-party components, and an estimated 80 to 90 percent of an application may consist of code written by someone else. According to statistics collected by Sonatype, the average application has 106 components. It doesn't matter if the problem is in one of those components. The organization is responsible for the entire software chain and is on the hook if a vulnerability in the library results in a security incident.
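A bill of materials turns the Heartbleed-style triage described earlier into a simple lookup. The BOM structure and advisory format in this Python sketch are invented purely for illustration:

```python
# Sketch: given a per-application bill of materials and an advisory naming
# vulnerable component versions, list the affected applications.
# The BOM schema and version numbers here are made-up examples.
ADVISORY = {"component": "openssl", "vulnerable_versions": {"1.0.1", "1.0.1f"}}

boms = {
    "billing-app":  [{"name": "openssl", "version": "1.0.1f"},
                     {"name": "zlib", "version": "1.2.8"}],
    "intranet-app": [{"name": "openssl", "version": "1.0.2j"}],
    "hr-portal":    [{"name": "libxml2", "version": "2.9.4"}],
}

def affected_apps(boms, advisory):
    """Return applications whose BOM lists a vulnerable component version."""
    hits = []
    for app, components in boms.items():
        for c in components:
            if (c["name"] == advisory["component"]
                    and c["version"] in advisory["vulnerable_versions"]):
                hits.append(app)
                break
    return sorted(hits)

print(affected_apps(boms, ADVISORY))  # -> ['billing-app']
```

Instead of testing every application, IT runs one query against the inventory and knows immediately which systems need the rebuilt software or temporary mitigations.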

Black boxes

When organizations buy software -- either commercial or open source -- they have only limited visibility into which components are in use. Especially diligent teams may look at the code to see which libraries are included, but libraries can call other components, and dependencies easily go more than two levels deep.

"People aren't even sure what they're using, especially when libraries call other libraries that they don't even know about," says Mark Curphey, CEO of software security company Sourceclear.

As many as one in 16 components used by development teams has a known security defect, according to Sonatype's 2016 State of the Software Supply Chain report. It's the equivalent of being told 6 percent of the parts used in building a car were defective, but nobody knew which part or who supplied it, Weeks says. A car owner would not accept that answer, nor should software owners.

Some software buyers are taking a stand. Both Exxon and the Mayo Clinic, for example, require software suppliers to provide a software bill of materials in order to discover potential security and licensing problems or whether the application is using an outdated version of a library.

When such problems are found, an administrator can ask the supplier to rebuild the application with the newer version. While waiting for the updated software, IT has the opportunity to put in temporary mitigations to protect the application from attackers looking to exploit the vulnerability. A software bill of materials also helps administrators perform spot checks of applications and code whenever a vulnerability is disclosed or a core library, such as OpenSSL, releases a new version.

The absence of known bugs at the moment is not, by itself, an argument for a component's safety. Some components may be at the latest available version but are several years old. If administrators and developers have the right information, they can decide whether or not they want to risk using an application containing an old, possibly unsupported, component.

Similar, but different programs

Understanding what components are being used is not only an open source software problem. Several efforts are underway to establish certification and testing laboratories focused on the security of software. Unlike the bill of materials, which helps software owners stay on top of maintenance and updates, these efforts focus on assisting buyers with the purchase decisions.

Underwriters Laboratories rolled out a voluntary Cybersecurity Assurance Program (UL CAP) earlier this year for internet of things and critical infrastructure vendors to assess the security vulnerabilities and weaknesses in their products against a set of security standards. UL CAP can be used as a procurement tool for buyers of critical infrastructure and IoT equipment. ICSA Labs has a similar IoT Certification Testing program that tests IoT devices on how they handle alert/logging, cryptography, authentication, communications, physical security, and platform security. An ICSA Labs certification means that the product underwent a testing program and that vulnerabilities and weaknesses were fixed.

The Online Trust Alliance has an IoT Trust Framework, which is a set of specifications IoT manufacturers should follow in order to build security and privacy -- such as unique passwords, encrypted traffic, and patching mechanisms -- into their connected devices. The framework will eventually become a global certification program, but for the moment, it's more of a guide to what to do correctly.

At this year's Black Hat conference, Peiter Zatko, the famous hacker known as Mudge, and Sarah Zatko unveiled a Consumer Reports-style ratings system, Cyber Independent Testing Lab, to measure the relative security and difficulty of exploitation for various applications. CITL's methodology includes looking for known bad functions and how often the application uses them, as well as comparing how frequently good functions are called as opposed to the bad ones.
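A toy version of the ratio CITL's methodology describes might count how often risky functions appear relative to their safer counterparts. The symbol lists and scoring below are illustrative assumptions, not CITL's actual criteria:

```python
# Illustrative only: classic risky C functions vs. their safer variants.
# CITL's real methodology is far more elaborate than this sketch.
RISKY = {"strcpy", "sprintf", "gets"}
SAFER = {"strncpy", "snprintf", "fgets"}

def hygiene_ratio(imported_symbols):
    """Fraction of relevant calls using the safer variant (1.0 is best)."""
    risky = sum(1 for s in imported_symbols if s in RISKY)
    safe = sum(1 for s in imported_symbols if s in SAFER)
    total = risky + safe
    return safe / total if total else 1.0

# e.g. symbols pulled from a binary with a tool like `nm -D`
symbols = ["strcpy", "snprintf", "fgets", "printf", "strcpy"]
print(hygiene_ratio(symbols))  # 2 safer vs. 2 risky calls -> 0.5
```

The appeal of a metric like this is that it can be computed over a binary without source access, which is what makes a Consumer Reports-style comparison across products feasible.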

"We as security practitioners tend to focus on exploitability, but as a consumer of a product, they're almost always going to say disruptability is what bothers them," Zatko said during his presentation. The plan is to release large-scale fuzzing results by the end of 2017.

Track ingredients for better security

Attackers have shifted their focus upstream to look at the components because targeting a library vulnerability gives them more victims than focusing on only a single application. The serialization flaw in Apache Commons Collections is a good example of how such flaws can be missed. An administrator may think there's nothing to worry about because the organization doesn't use JBoss, not realizing another application they rely on may be using the vulnerable collection code and is susceptible.

A software bill of materials helps administrators gain visibility into the components used in applications and discover potential security and licensing problems. More important, administrators can use the list to spot-check applications and code from suppliers to obtain an accurate view of potential vulnerabilities and weaknesses, as well as roll out patches in a timely manner.

InfoWorld Security

It’s no secret. We’re really bad at passwords. Nevertheless, they aren’t going away any time soon.

With so many websites and online applications requiring us to create accounts and think up passwords in a hurry, it’s no wonder so many of us struggle to follow the advice of so-called password security experts.

At the same time, the computing power available for password cracking just gets bigger and bigger.

OK, so I started with the bad news, but this cloud does have a silver lining.

It doesn’t need to be as hard as we make it and the government is here to help.

That’s right, the United States National Institute of Standards and Technology (NIST) is formulating new guidelines for password policies to be used in the whole of the US government (the public sector).

Why is this important? Because the policies are sensible and a great template for all of us to use within our own organizations and application development programs.

Anyone interested in the draft specification for Special Publication 800-63-3: Digital Authentication Guidelines can review it as it evolves over on Github.

For a more human approach, security researcher Jim Fenton did a presentation earlier this month at the PasswordsCon event in Las Vegas that sums up the changes nicely.

What’s new?

What are the major differences between current received wisdom about “secure passwords” and what NIST is now recommending?

Some of the recommendations you can probably guess; others may surprise you.

We’ll start with the things you should do.

Favor the user. To begin with, make your password policies user friendly and put the burden on the verifier when possible.

In other words, we need to stop asking users to do things that aren’t actually improving security.

Much research has gone into the efficacy of many of our so-called “best practices” and it turns out they don’t help enough to be worth the pain they cause.

Size matters. At least it does when it comes to passwords. NIST’s new guidelines say you need a minimum of 8 characters. (That’s a floor, not a ceiling – you can require a longer minimum for more sensitive accounts.)

Better yet, NIST says you should allow a maximum length of at least 64, so no more “Sorry, your password can’t be longer than 16 characters.”

Applications must allow all printable ASCII characters, including spaces, and should accept all Unicode characters, too, including emoji!

This is great advice, and considering that passwords must be hashed and salted when stored (which converts them to a fixed-length representation) there shouldn’t be unnecessary restrictions on length.

We often advise people to use passphrases, so they should be allowed to use all common punctuation characters and any language to improve usability and increase variety.

Check new passwords against a dictionary of known-bad choices. You don’t want to let people use ChangeMe, thisisapassword, yankees, and so on.

More research needs to be done into how to choose and use your “banned list,” but Jim Fenton thinks that 10,000 entries is a good starting point.
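Pulled together, the "do" rules above amount to a very short validation routine: check length, check a banned list, and nothing else. This is a minimal sketch; the banned list here is a tiny illustrative sample standing in for a real 10,000-entry dictionary.

```python
# NIST-style password check: length limits and a banned list, but no
# composition rules. The banned list is a tiny illustrative sample.
BANNED = {"changeme", "thisisapassword", "yankees", "password", "12345678"}

def check_password(candidate):
    """Return (accepted, reason)."""
    if len(candidate) < 8:
        return False, "too short: use at least 8 characters"
    if len(candidate) > 64:
        return False, "too long: up to 64 characters are accepted"
    if candidate.lower() in BANNED:
        return False, "too common: pick something less guessable"
    # No composition rules: spaces, punctuation, and Unicode are all fine.
    return True, "ok"

print(check_password("thisisapassword"))
print(check_password("correct horse battery staple"))  # (True, 'ok')
```

Note what's absent: no required character classes, no symbol blacklists. A long, freely chosen passphrase sails through.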

The don’ts

Now for all the things you shouldn’t do.

No composition rules. What this means is, no more rules that force you to use particular characters or combinations, like those daunting conditions on some password reset pages that say, “Your password must contain one lowercase letter, one uppercase letter, one number, four symbols but not &%#@_, and the surname of at least one astronaut.”

Let people choose freely, and encourage longer phrases instead of hard-to-remember passwords or illusory complexity such as pA55w+rd.

No password hints. None. If I wanted people to have a better chance at guessing my password, I’d write it on a note attached to my screen.

When you allow hints, people set password hints like rhymes with assword. (Really! We have some astonishing examples from Adobe’s 2013 password breach.)

Knowledge-based authentication (KBA) is out. KBA is when a site says, “Pick from a list of questions – Where did you attend high school? What’s your favourite football team? – and tell us the answer in case we ever need to check that it’s you.”

No more expiration without reason. This is my favourite piece of advice: If we want users to comply and choose long, hard-to-guess passwords, we shouldn’t make them change those passwords unnecessarily.

The only time passwords should be reset is when they are forgotten, if they have been phished, or if you think (or know) that your password database has been stolen and could therefore be subjected to an offline brute-force attack.

There’s more…

NIST also provides some other very worthwhile advice.

All passwords must be hashed, salted and stretched, as we explain in our article How to store your users’ password safely.

You need a salt of 32 bits or more, a keyed HMAC hash using SHA-1, SHA-2 or SHA-3, and the “stretching” algorithm PBKDF2 with at least 10,000 iterations.

Password hashing enthusiasts are probably wondering, “What about bcrypt and scrypt?” In our own How to article, we listed both of these as possibilities, but wrote, “We’ll recommend PBKDF2 here because it is based on hashing primitives that satisfy many national and international standards.” NIST followed the same reasoning.
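The salt-hash-stretch recipe maps almost directly onto the Python standard library. This is a sketch of the scheme described above: a random salt of at least 32 bits (16 bytes here, comfortably above the minimum) and PBKDF2 with at least 10,000 iterations.

```python
import hashlib
import hmac
import os

ITERATIONS = 10_000  # the guideline's minimum; higher is better

def hash_password(password):
    """Return (salt, digest) for storage; never store the password itself."""
    salt = os.urandom(16)  # 128-bit random salt, well above the 32-bit floor
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the stretched hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                    salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because the salt is random per password, identical passwords produce different stored digests, and the iteration count makes offline brute-forcing of a stolen database far slower.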

Additionally, and this is a big change: SMS should no longer be used in two-factor authentication (2FA).

There are many problems with the security of SMS delivery, including malware that can redirect text messages; attacks against the mobile phone network (such as the so-called SS7 hack); and mobile phone number portability.

Phone ports, also known as SIM swaps, are where your mobile provider issues you a new SIM card to replace one that’s been lost, damaged, stolen or that is the wrong size for your new phone.

In many countries it is unfortunately far too easy for criminals to convince a mobile phone store to transfer someone’s phone number to a new SIM, thereby hijacking all their text messages.
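The usual alternative to SMS codes is an authenticator app, which computes a time-based one-time password (TOTP, RFC 6238) locally from a shared secret, so there is nothing in transit to intercept or redirect. A minimal sketch using only the standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0]
            & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Server and authenticator app derive the same code from the shared
# secret and the clock; the example secret here is arbitrary.
print(totp("JBSWY3DPEHPK3PXP"))
```

The code changes every 30 seconds and never travels over the phone network, which sidesteps the SS7 and SIM-swap attacks described above.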

What next?

This is just the tip of the iceberg, but certainly some of the most important bits.

Password policies need to evolve as we learn more about how people use and abuse them.

Sadly there have been more than enough breaches for us to see the impacts of certain types of policy, such as the evidence shown above from Adobe’s 2013 hack about the danger of password hints.

NIST’s goal is to get us to protect ourselves reliably without unneeded complexity, because complexity works against security.

What are your thoughts on these changes? Will you implement them for your organization? Tell us in the comments.



Information Security Podcasts

Las Vegas -- The Black Hat 2016 conference keynote was a call to action in both the public and private sectors to focus on ways to make dealing with cyberthreats faster and more efficient.

Dan Kaminsky, security researcher, co-founder and chief scientist of White Ops, began by saying it is important not to underestimate speed when it comes to making security decisions.

"Speed has totally changed, what was once months has become minutes. Everything has changed," Kaminsky said. "What you can build, what gets broken and how long we have to learn and adapt from our experiences, those cycles have gotten so fast. And our need to make things secure and functional and effective has just exploded."

Kaminsky described a number of different things, from large projects to small moments in development, that can impact security.

"People think that it's a zero sum game, that if you're going to get security everyone else has to suffer. Well, if we want to get security, let's make life better for everybody else. Let's go ahead and give people environments that are easy to work with," Kaminsky said. "Think in terms of milliseconds. Think in terms of the lines that you're impacting, the time that you're taking, the difficulty in making something scale out not just for your own use but for the use of the world. This is the game to play."

Kaminsky dug deep into the history of the internet and the gritty details of code to identify ways to improve speed, but two themes came back time and again in his talk.

First, Kaminsky said information sharing is a critical way to improve security in the short-term. He said managers have all had the experience of assigning engineers to fix a security issue "that has probably been fixed a thousand times, so maybe we should start actually releasing the code that we're doing … If you actually want your coworkers to solve a problem not repeatedly, it might be cheaper and [more] cost effective for you to just give it to the world."

"Bugs are not random. Fixes are not random either. We're not taking all of the lessons we have to deal with and actually dealing with them," Kaminsky said. He noted talking to a group of bankers who shared code and fixes with each other. "He said, 'Yeah, we don't compete on security. If one of us gets hit, we're all going down so we should probably share our information.'"

For longer term projects, Kaminsky said we needed to see more work from the public sector.

"I believe in all projects in terms of timelines," Kaminsky said in a press conference following his keynote. "How long is it going to take to do this? Some things are just going to take three years of effort and the longer the timeline, the less it's something that private sector is good at and the more it's something the public sector is good at. How do I get a hundred nerds working on a project for ten years and not getting interrupted and not getting harassed and not getting told to do different things? The way you don't make it happen is how we're doing it in infosec today, which is the spare time of a small number of highly paid consultants. We can do better than that."

Kaminsky said he wants something like the National Institutes of Health (NIH) for cyber -- a public works organization to take on long-term research projects with stable funding.

"I want an organization dedicated to the extended study of infosec, that can fund and implement the hard and sometimes really boring work that fixing all these problems is going to take," Kaminsky said.

One example of such an effort was the work done by the Software Assurance Metrics And Tool Evaluation (SAMATE) project at the National Institute of Standards and Technology (NIST), which Kaminsky described as "the greatest scut work" he's ever seen.

"They went ahead and they collected variants of every single vulnerability in C and Java, and there's like thousands, and they went ahead and made it so you can compile them into one program," Kaminsky said, and he described the value of such a body of work for all of the companies working on static analysis tools. "That stuff may exist in the bowels of Microsoft or Oracle or many other companies, but it was NIST that got it out the door."

No matter the aim, short-term or long, Kaminsky stressed the value in sharing information and being open with knowledge.

"Experts and users have different things in mind for their technology. I don't mind if you just want to work on your own stuff," Kaminsky said. "But the real magic comes when you take the expertise that you've got in security and you translate it and you rebuild it and you reform it. Don't be afraid to take the knowledge you have and make it accessible to vastly more people."

Next Steps

Learn more about how to shift IT security budgets to focus on attack detection and response.

Find out how to use security tools to automate incident response.

Get more information on the differences between dynamic code analysis and static analysis for source code testing.

SearchSecurity: Security Wire Daily News