NTP Releases Version 4.2.8p9

Original release date: November 21, 2016

The Network Time Foundation's NTP Project has released version ntp-4.2.8p9 to address multiple vulnerabilities in ntpd. Exploitation of some of these vulnerabilities may allow a remote attacker to cause a denial-of-service condition.

US-CERT encourages users and administrators to review Vulnerability Note VU#633847 and the NTP Security Notice Page for vulnerability and mitigation details.
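As a quick triage step, administrators can compare their installed ntpd version against the fixed release. A minimal sketch in shell (the installed version string is hard-coded here for illustration; in practice you would parse it from `ntpd --version` or `ntpq -c rv`):

```shell
#!/bin/sh
# Fixed release named in the advisory.
fixed="4.2.8p9"

# Example value for illustration; in practice, take this from your running ntpd.
installed="4.2.8p8"

# sort -V orders version strings naturally (4.2.8p8 < 4.2.8p9 < 4.2.8p10),
# so if the installed version sorts first and differs, it is older than the fix.
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n 1)

if [ "$installed" != "$fixed" ] && [ "$oldest" = "$installed" ]; then
    echo "ntpd $installed is older than $fixed: upgrade recommended"
else
    echo "ntpd $installed is at or beyond the fixed release"
fi
```

Note that plain lexical comparison would get this wrong (`p10` sorts before `p9` as a string), which is why the version-sort flag is used.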




US-CERT Current Activity

There is little question that the perpetrators of cyberthreats spend little time thinking inside the box — that’s how they stay ahead of their victims. It’s time for some out-of-the-box thinking of our own to get serious about fighting back. It’s time for the democratization of cybersecurity data.

Here is the challenge to users, organizations and security vendors alike: First, we should aggressively democratize the threat data we all have and share it securely yet freely with each other. Second, we should pivot a full 180 degrees from the accepted practice of automatically classifying, by default, all cyberthreat data. Instead, we should declassify threat data by default. Hence, the democratization of cybersecurity data.

Thinking Outside the Box

Cybercrime information sharing is nothing new. Unfortunately, the wrong people have been doing the sharing, and they have elevated the practice to a commercial art form. Cooperating and collaborating on the Dark Web, the most sophisticated cybercriminals build and peddle attack software to each other. They even have seller ratings and rankings for their malware, with the most effective earning five stars. They offer gold, silver and bronze levels of service — even money-back guarantees if the malicious efforts fail.

With thieves this organized and sophisticated, it is small wonder that estimates of their annual take in illegal profits total $455 billion. These aren't amateurs. The United Nations has estimated that highly organized, well-funded criminal gangs account for 80 percent of breaches today.

For these and so many other good reasons, the time is now for businesses, governments and other organizations to elevate cyberthreat information sharing to entirely new levels. The public sector has taken steps in this direction: Last year the U.S. passed the Cybersecurity Information Sharing Act (CISA), whose goal is to help organizations share cyberthreat information and actual attack data anonymously and without fear of liability.

Democratization of Cybersecurity Data Dents Cybercrime

There are massive collections of cybercrime data largely kept under lock and key in individual organizations. Security vendors, including IBM, typically have the largest repositories.

Why has it been kept secret? Both security vendors and businesses tend to hold onto this data for its perceived competitive value. The data is valuable to some extent, but pooling that much threat information could be an even more formidable competitive weapon. After all, it isn't possessing the data that yields an advantage; it's what each organization or vendor does with it.

This kind of sharing is not new in our business. The whole open source movement that gave us Linux, OpenStack, Hadoop, Spark and so much more resulted from aggressive information sharing. It can be the same with cyberthreat data. Large-scale sharing of threat data will signal a new high water mark in fighting cybercrime.

We are walking the walk at IBM, recognizing that we were as much a part of the problem as any other business or organization. That is why IBM published all of its actionable, third-party global threat data — all 700 terabytes of it. This includes real-time indicators of live attacks.

We believe the free consumption and sharing of real-time threat data from our repository can put a sizable dent in cybercrime efforts. Think of what else we can accomplish with the democratization of cybersecurity data.

Information Sharing at the Speed of Business

As mentioned earlier, sharing is only one part of the out-of-the-box thinking we need to adopt. We have to share this information as soon as possible, not weeks or months after a major breach.

The default action today is to immediately classify such information, rendering it unshareable until it is eventually declassified. Instead, put a timeline on classification of new threat data — maybe 48 or 72 hours, no more. If no valid, justifiable case is made for continued classification within that period, release it to be shared among other organizations. The aforementioned CISA spells out methods for doing this securely so the information doesn’t fall into the wrong hands.

We must abandon the Cold War mentality that leads us to classify all information and share nothing. We are all engaged in a very hot war with cybercriminals. Speed matters when it comes to using relevant data to stop active attacks and thwart future threats. Information sharing at the speed of business can be a formidable weapon — we just need to unleash it.

Learn more about staying ahead of threats with global threat intelligence and automated protection


Security Intelligence

Starting New Year's Day, Google will begin labeling as "insecure" all websites that transmit passwords or ask for credit card details.

If you use the company's Chrome browser (and a lot of people do), then from version 56 onward, any website that does not use a security certificate will feature a red exclamation mark and the text "Not secure," also in red, at the start of the web address.

Those that do use certificates and so have an HTTPS connection will continue to get a nice little green padlock icon.

The decision was announced on Google's security blog and will "help users browse the web safely." It is part of "a long-term plan to mark all HTTP sites as non-secure."

If a website is not secured, anyone on the same network can interfere with the page before you see it. For a long time, only websites with serious security concerns bothered to get a certificate – businesses taking payments, say, or banks allowing people to log into their accounts.

Over time, however, it has grown increasingly important to include additional security. Google notes that more than half the pages accessed through Chrome are now HTTPS – something it has decided is a milestone.

Despite how long HTTPS has been around, introducing a security certificate to a website can still throw up problems. There is also an ongoing cost that, while small, still acts as a barrier to small businesses in particular.

Google worries that users are not really aware of the potential risk they face if they provide login details and credit card information to a website that does not have an electronic security certificate.

Of course, HTTPS does not provide a certainty of security. Just this week a study showed that millions of internet-connected devices make the keys used for encrypting information readily available, immediately undermining whatever additional security the certificate provides.

Not everyone is excited about the prospect of moving to full HTTPS, either. As one NASA sysadmin pleaded earlier this year, lots of people rely on plain old HTTP to peer-share information.

"Studies show that users do not perceive the lack of a 'secure' icon as a warning," his post states. "Users [also] become blind to warnings that occur too frequently."

Chrome 56 will label HTTP pages with password or credit card form fields as "not secure," and future releases will extend that approach to all HTTP pages if people are visiting them in its "incognito" mode.

"Eventually, we plan to label all HTTP pages as non-secure, and change the HTTP security indicator to the red triangle that we use for broken HTTPS," the post says. ®



The Register - Security

Black Hat Dan Kaminsky, the savior of DNS and chief scientist for White Ops, has used the opening keynote of Black Hat 2016 to outline three technologies he has been working on that could make working online a lot safer – if they are adopted.

First, and most importantly, Kaminsky has been developing a micro-sandboxing system that spins up small virtual machines (VMs) to carry out sensitive tasks, limiting their ability to infect other parts of the system.

Dubbed Autoclave, it limits the ability of the code running in the VM to communicate, and monitors what's going on inside to make sure there are no unexplained requests. The name comes from the heated chambers used to sterilize surgical equipment.

Container technology is perfect for this, Kaminsky told The Register before the show, since it has great application compatibility. He cited Docker as a great example of what could be used, but other container systems could also spin up VMs in milliseconds to cut down the processor lag that might turn off some users.
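The ephemeral-sandbox idea can be illustrated with an off-the-shelf container runtime. This is not Kaminsky's actual Autoclave, just a hedged sketch of the concept using Docker, assuming Docker is installed; 'alpine' is an arbitrary small image chosen for illustration:

```shell
# Illustrative sketch of the ephemeral-sandbox idea (not Autoclave itself).
#   --rm            destroy the container as soon as the task exits
#   --network=none  cut off all network communication
#   --read-only     make the container filesystem immutable
docker run --rm --network=none --read-only alpine \
    sh -c 'echo "sensitive task runs here, then the sandbox evaporates"'
```

The design point mirrors the article: the sensitive task gets an isolated, disposable environment with no ability to phone home, so even if it is compromised there is little to steal and nowhere to send it.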

The downside is that, at the moment, none of the major cloud vendors are going to support this kind of rapid spinning up and down of VMs. Amazon and Google won't support the Autoclave as it stands at the moment, and Azure can only do so in limited circumstances – but Kaminsky said that if enough people demand it, they could.

Kaminsky is expecting this to be a long fight, similar to the one about medical germ theory, which he references in the name of the system. Hopefully he won't end up like Ignaz Semmelweis, who provided the first empirical proof of germ theory but was shunned by the medical community and ended up a crazed alcoholic.

The second piece of technology is IronFrame, the theory for which Kaminsky outlined at last year's DEFCON. IronFrame can be built into a "magic browser," he said, allowing web designers to build webpages whose functions run in a known safe state.

If a new software build isn't finalized then it can be embedded in the browser and run as a separate file, while suppressing extraneous functions. It would also allow direct contact with third-party web functions without having to leave a target page. As Kaminsky is an advisor to the World Wide Web Consortium, it's possible that IronFrame could be put into future browser specifications, but that's a ways down the line. In the meantime, Kaminsky said, it would allow web designers to have better control of what's on their web pages and would let users try out new features without imperiling their systems.

Kaminsky's third idea is, he acknowledged, a bit out there – which is why he didn't talk about it at Black Hat. The technology, dubbed Astatica, aims to apply machine learning techniques to security training for fleshy humans.

"It wasn't until I tried to learn machine learning that I understood how so many people have problems with security," he said. "We are terrible at teaching people how to make things secure. We're not paying enough attention to what they need."

Astatica uses CSV files to process information and suggest new ways of learning about security issues. The system is still in its early stages, but Kaminsky says it could be a major breakthrough in teaching people about security.

None of these technologies is going to fix the internet instantly; it's a long-term process, he said. Ideally, this is something government should devote itself to fixing over the long term (as in five to 10 years of research). Business won't do it, he said, because it only thinks about the next quarter's results.

But action is desperately needed, he opined, because for the first time people are actually losing confidence in the internet. He cited the pathetic security of Internet of Things devices, which has left people assuming technology is unsafe, and this could provide a stimulus for change.

"We have the opportunity, we've got the interest, we've got the – I hate to say it – fear," Kaminsky said. "Not all fear is FUD – things are actually getting compromised – so let's figure out why this is hard, and let's go fix it." ®



The Register - Security


Munish Gupta

Senior Security Architect, Infosys Ltd

Munish Gupta, CISSP, is a Principal, Information & Cyber Risk Management, at Infosys Ltd (NYSE: INFY). He has around 13 years of experience in providing...

See All Posts

During most of my discussions with C-level stakeholders, the same questions arise: What can we do to add further defense to our public cloud deployments? Are the built-in cloud security controls sufficient for my business, or do we need more?

My answer is quite simple: Yes, you need additional controls, depending on the workloads you are going to deploy in public clouds.

A Shared Responsibility

Infrastructure-as-a-service (IaaS) offerings are built on a shared responsibility model, which lets the organization assess what additional security controls are needed. It is the organization's responsibility to secure the workloads hosted in the cloud environment; the cloud provider is responsible only for the layers below the operating system (OS).

While cloud providers are maturing and offering a broader range of cloud security controls, these are still the bare minimum. AWS provides security groups and network access control lists (NACLs), for example, but not intrusion prevention controls, web application-level security, threat intelligence capabilities or other features.
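A security group rule illustrates why these built-in controls are the bare minimum: it is essentially a port/CIDR allow entry with no payload inspection. A sketch using the AWS CLI (the group ID is a placeholder, and this single rule is not a recommended configuration):

```shell
# Allow inbound HTTPS from anywhere. The rule matches only on protocol,
# port and source CIDR -- it cannot inspect the traffic itself, which is
# why IPS/WAF-style capabilities must come from additional products.
# sg-0123456789abcdef0 is a placeholder group ID.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0
```

Anything that needs to understand the traffic flowing through that port (SQL injection attempts, command-and-control beacons and so on) is out of scope for this control, which is the gap the next-gen products discussed below fill.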

Another problem is that public cloud providers typically offer only site-to-site VPN, not client-to-site termination. This means you need to route your remote users' traffic through your corporate network, which results in performance issues, additional cost and increased complexity.

Next-Gen Cloud Security Controls

So now the question is: What should we do to add additional security controls at the perimeter of our public cloud deployments?

Security gateways and firewall providers are getting quite mature, and their offerings are better aligned with industry needs. Next-gen firewall and unified threat management (UTM) products are now built for virtual environments from the bottom up.

Investing in these next-gen products will give you better control of your public cloud and make your life easier during audits. Deploying a next-gen virtual firewall in your public cloud environment also allows your security administrators to define context-based rules and ACLs to protect the environment. Additionally, it permits your remote users to terminate their VPN connections directly in the public cloud environment.

In a nutshell, additional cloud security controls are worth every penny of your investment.

Read the IDC white paper: A CISO’s Guide to Enabling a Cloud Security Strategy

Topics: Cloud, Cloud Security, Cloud Services, Infrastructure-as-a-Service (IaaS)


Security Intelligence