Is Your Organization Prepared to Recognize and Address an Insider Threat?

Update: News items that revolve around carmaker Tesla and its maverick CEO Elon Musk tend to attract a great deal of attention. Yet somehow, despite all the attention, the events in question have a fuzzy quality about them. No situation exemplifies this better than last summer’s story about Musk’s claim that an insider had stolen intellectual property and passed it on to competitors. In a follow-up to the original Business Insider reporting, Bloomberg Businessweek examined some of the twists and turns this story has taken and fleshed out some of the murkier details.

In the aftermath of the original report, whistleblower/saboteur (depending on whom you ask) Martin Tripp was fired from his position at Tesla’s Gigafactory in Storey County, Nevada. Musk sent a company-wide email on June 17, 2018, saying the company had been sabotaged; Tripp was fired on June 19. Three days later, Tesla filed suit against Tripp, seeking $167 million in damages. To top it off, someone from Tesla called the Storey County Sheriff regarding an anonymous tip stating that “Tripp was planning a mass shooting at the Gigafactory.” Police found Tripp that evening; “he was unarmed and in tears.”

Matt Robinson and Zeke Faux note in their Bloomberg piece that Musk took the opposite tack when dealing with this situation. Rather than “ignoring” Tripp while Tesla security investigated the incident, Musk publicly attacked Tripp and Business Insider reporter Linette Lopez, accusing her of being “on the payroll of short sellers” and working with Tripp to monetize Tesla IP. They also highlight that this “incident was the beginning of a social media meltdown so epic that the U.S. Securities and Exchange Commission forced Tesla to appoint a so-called Twitter sitter,” that is, in-house legal counsel to monitor Musk’s online activity.

While the above is salacious enough, additional former Tesla employees have come forward to corroborate Tripp’s story: not only did he repeatedly raise safety issues (including that damaged batteries were being installed in vehicles) with supervisors who ignored him, but the Gigafactory was also generally a safety and security nightmare.

Sean Gouthro, who has now filed a whistleblower report with the SEC, was brought on as a security manager at the Gigafactory. Among Gouthro’s more astounding claims are that Tesla’s security operation had Tripp’s phone hacked, had him followed, and misled police. He also states that, despite their serious nature and general safety implications, Tripp’s claims would have been treated as a low priority.

In the lead-up to Tripp’s firing, Musk was frantically trying to ramp up production of the Tesla Model 3; to do that, the company went on a hiring and construction frenzy. According to Gouthro, a good number of employees were living out of their vehicles at the complex. Some employees were discovered using cocaine and meth in bathrooms, and people were having sex in parts of the facility that were still under construction. On top of that, he claims that the large influx of new employees caused severe problems with access controls and that he received calls from “local scrap yards…report[ing] thieves were trying to sell obscure electric vehicle parts.”

This is an ongoing story, and given the level of acrimony between the parties and the scrutiny of regulatory agencies in the matter, it’s unlikely we’ll see the end of this soon. However, these newer reports raise some interesting questions for security executives regarding the ethical implications of labeling potential whistleblowers as insider threats. Does your security team have an established protocol for handling safety or security issues brought forward by a whistleblower? What would such a policy look like at your organization?

Original Article: On June 17, 2018, Elon Musk, CEO of Tesla, sent one of the “All Employee” e-mails that every CEO dreads: The company had been sabotaged by a malicious employee. According to the e-mail, the employee had the access and ability to make “direct code changes to the Tesla Manufacturing Operating System,” using one or more false usernames and exporting intellectual property to “unknown third parties.” In the communication, Musk suggests that this is only what the employee had initially admitted to and that it may not be the full extent of the damage.

The employee in question, revealed both by the Washington Post and in court documents to be Martin Tripp, claims that he was a whistleblower rather than a saboteur. Tripp states that he did not act because he didn’t receive a promotion but rather gathered data on rampant safety issues and waste that investors and customers had the right to know about.

Either way, Tripp and/or someone with access to sensitive areas on the manufacturing network was able to escalate user privileges to extract internal data and pass them along. Whether he did it for personal gain/revenge or for the public good is for the courts to decide.

Stepping back from the drama of Musk’s very public exchange in the wake of the revelations, the situation provides an opportunity to examine some best practices around dealing with an active or potential insider threat.

Recognizing Threat Actors Inside Your Company

The accelerating adoption of newer technology has greatly expanded the attack surface that security officers must defend. Because executives and security officers operate from a position of trust with employees, most of an organization’s security resources are expended addressing external threats. This creates an overall lack of awareness of what is happening within the company, leading the security team to overlook behavioral indicators that could help predict internal threats.

According to IBM’s Security Intelligence blog, some behavioral indicators to watch out for are:

  • Downloading large amounts of data to external drives or uploading them to personal cloud storage;
  • Attempting to access confidential data that are beyond that user’s needs;
  • Attempting to bypass security controls;
  • Working odd hours or attempting to access work facilities outside of normal hours;
  • Atypical social media use;
  • Using a mobile device or camera to photograph computer screens, documents, or sensitive locations in the facility;
  • Acting negatively or even abusively toward coworkers or management;
  • Changes in behavior in the workplace, such as suddenly becoming introverted; and
  • Excessive printing or scanning, particularly alongside “network crawling,” and/or downloading large amounts of data from the network.

Of course, this is only a truncated list of common indicators, and a malicious insider with elevated privileges (whether physical or cyber) may find it easy to cover their tracks. While some of these indicators are hard to explain away, such as photographing sensitive areas, many can be brushed off as ignorance of company policy or simple human error.
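For teams that want to operationalize a few of these indicators, the minimal sketch below shows one way unusually large data exports and off-hours activity might be flagged from an activity log. The log format, field names, and thresholds here are illustrative assumptions, not any particular vendor’s schema or tooling.

```python
# Minimal sketch: flag two indicators from the list above -- large data exports
# and off-hours activity -- in a hypothetical activity log. Field names,
# file name, and thresholds are assumptions for illustration only.
import csv
from collections import defaultdict
from datetime import datetime

EXPORT_THRESHOLD_MB = 500          # assumed per-day export ceiling before review
BUSINESS_HOURS = range(7, 19)      # assumed 07:00-19:00 working window

def flag_suspicious(log_path: str):
    exported = defaultdict(float)  # user -> MB exported so far today
    flags = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, timestamp, action, size_mb
            ts = datetime.fromisoformat(row["timestamp"])
            if row["action"] == "export":
                exported[row["user"]] += float(row["size_mb"])
                if exported[row["user"]] > EXPORT_THRESHOLD_MB:
                    flags.append((row["user"], "large data export"))
            if ts.hour not in BUSINESS_HOURS:
                flags.append((row["user"], f"off-hours activity at {ts:%H:%M}"))
    return flags

if __name__ == "__main__":
    for user, reason in flag_suspicious("activity_log.csv"):
        print(f"REVIEW: {user} - {reason}")
```

A flag from a script like this is only a prompt for human review, not evidence of malice; most hits will have innocent explanations.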

Mitigating Insider Threats

As noted above, internal threats can be difficult to address because they tend to fly under the radar, only being discovered after the damage has been done. Therefore, one of the best ways to deal with the potential for malicious actions by employees, whether physical or cyber, is to get ahead of them.

For starters, you need to create a security culture in the workplace that actively encourages employees to speak up while simultaneously discouraging malicious actions. This comes down to developing a clear company policy and reinforcing it through active, continual training.

A security policy that is developed to deter insider threats should make your intentions clear, for the benefit of both the security team and the employees. By explicitly stating that there will be regular monitoring of abnormal network traffic (such as large downloads or off-hour printing), facility access credentials, or external storage devices (such as USB drives), you are making a potential threat aware that their actions will be monitored. At the same time, you are also raising awareness of common behavioral indicators among other employees.
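One way to keep the written policy and the actual monitoring consistent is to express the policy’s commitments as machine-readable rules, so alerts fire on exactly what employees have been told will be watched. The sketch below is only an illustration of that idea; the event names, limits, and wording are hypothetical.

```python
# Hypothetical sketch: represent the monitoring commitments from a written
# security policy as rules, then evaluate a user's daily event counts against them.
from dataclasses import dataclass

@dataclass
class Rule:
    event: str           # event type emitted by endpoint, print, or badge systems (assumed)
    limit: int           # occurrences allowed per day before an alert
    description: str     # mirrors the wording employees see in the policy

POLICY_RULES = [
    Rule("bulk_download", 1, "downloads over 1 GB are reviewed"),
    Rule("offhours_print", 0, "printing outside business hours is reviewed"),
    Rule("usb_mass_storage_write", 0, "writes to removable media are reviewed"),
]

def evaluate(daily_counts: dict) -> list:
    """Return alert messages for any rule whose daily count exceeds its limit."""
    alerts = []
    for rule in POLICY_RULES:
        if daily_counts.get(rule.event, 0) > rule.limit:
            alerts.append(f"ALERT: {rule.event} exceeded - {rule.description}")
    return alerts

# Example: counts aggregated for one user over one day (illustrative values).
print(evaluate({"bulk_download": 2, "offhours_print": 1}))
```

Keeping the rules in one place also makes it easier to show employees, auditors, and counsel that monitoring does not drift beyond what the policy announced.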

Regular audits can also help mitigate insider threats. Periodic reviews of facility and network access credentials can detect inappropriate privileges that linger from an employee’s previous job function, as well as other privilege escalations that need to be rolled back.
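As a rough illustration of such an audit, the sketch below compares each user’s currently granted entitlements against a baseline for their current role and flags anything left over from a previous job function. The roles, users, and entitlement names are hypothetical.

```python
# Minimal sketch of a periodic access audit: surface privileges that exceed
# the baseline for a user's current role. All names here are illustrative.
ROLE_BASELINE = {
    "line_technician": {"badge:assembly_floor", "app:mes_readonly"},
    "security_manager": {"badge:all_areas", "app:cctv", "app:incident_db"},
}

CURRENT_ACCESS = {
    # user: (current role, entitlements actually granted today)
    "jdoe": ("line_technician",
             {"badge:assembly_floor", "app:mes_readonly", "app:incident_db"}),
}

def audit():
    for user, (role, granted) in CURRENT_ACCESS.items():
        excess = granted - ROLE_BASELINE.get(role, set())
        if excess:
            print(f"{user} ({role}) has privileges beyond the role baseline: {sorted(excess)}")

audit()
```

In practice the role baselines and granted entitlements would come from the identity management and badge systems rather than hard-coded dictionaries, but the comparison is the same.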

A clear policy should also provide ways to invest employees in the overall security of the organization. One easy way to do this is to give workers channels to anonymously report potential concerns directly to the security team. When staff can report incidents without fear of recrimination from coworkers or management, they are more willing to come forward with pertinent information regarding internal threats.