An exploratory summary of keynotes from Cybersecurity Awareness Month.
Every year in October, government agencies such as the Cybersecurity and Infrastructure Security Agency (CISA) and the National Cyber Security Alliance (NCSA), along with other organizations like the National Initiative for Cybersecurity Careers and Studies (NICCS), partner with industry to promote National Cybersecurity Awareness Month (NCSAM). As part of the campaign, a toolkit was published and made available to help organizations raise awareness internally.
The toolkit outlines theme and topic suggestions, calendar content strategies, and even a potential messaging framework. Cybersecurity has been an increasing concern among many businesses and individuals. As new technologies and innovations emerge, those same inventions can naturally be misappropriated to serve harmful purposes that were never originally intended. In the same light, studying these risks can foster a greater consciousness and awareness of our own actions.
With several hundred organizations participating within their respective ecosystems, the general takeaway was that employees and individuals should increase awareness of their digital footprints and of the implications of their digital activities and devices, which can have ripple effects trackable by offenders. This is difficult, and increasingly important, as technological advances arrive at daunting speed. Although it can seem a nearly impossible feat, and admittedly a scary one for most C-level executives, becoming more digitally aware is paramount for understanding our evolutionary pathway, expanding our consciousness of our actions as a civilization and of their ripple effects on humanity.
Scaling the conversation back to the scope of expanding awareness of cyber footprints, organizations like Microsoft have implemented machine learning-based security capabilities within their systems to assist in tracking documents. Organizations can then integrate AI algorithms to identify the most probable threats and choose to investigate further if deemed necessary. In an article by Keara Dowd, BizTech provides more insight on this new AI technology-based approach to cybersecurity.
AI Automation in Cybersecurity
Some of the biggest threats that businesses face are internal, as it can be tough to track which employees have access to what information — and just as difficult to know what they’re doing with it.
To help change that, new security capabilities are being built into Microsoft 365, with a rollout planned for the end of the year. The software allows users to apply certain levels of security to certain documents. It then uses machine learning and artificial intelligence to track those documents and suggest ratings for similar files.
If something seems out of the ordinary — say a document marked “confidential” is transferred to a thumb drive — the system alerts administrators to the irregularity. The organization can then take that information and decide whether to launch a formal investigation. The alerts are customizable, so each business can input its own policies.
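The alerting behavior described above can be illustrated with a small sketch. This is not Microsoft's actual implementation; the policy table, event fields, and function names are all assumptions used to show how a customizable, label-driven alert rule might work.

```python
# Conceptual sketch of a customizable sensitivity-alert rule.
# The policy, event schema, and names are illustrative assumptions,
# not Microsoft 365's real API.

# Each organization supplies its own policy: which sensitivity labels
# trigger alerts for which destination types.
DEFAULT_POLICY = {
    "confidential": {"removable_media", "personal_email", "external_share"},
    "internal": {"external_share"},
}

def check_event(event, policy=DEFAULT_POLICY):
    """Return an alert dict if a file transfer violates policy, else None."""
    label = event.get("sensitivity", "public")
    destination = event.get("destination_type")
    if destination in policy.get(label, set()):
        return {
            "severity": "high" if label == "confidential" else "medium",
            "message": f"{label!r} document {event['file']} sent to {destination}",
            "user": event.get("user"),
        }
    return None

# Example: a confidential document copied to a thumb drive raises an alert.
alert = check_event({
    "file": "q3-forecast.xlsx",
    "sensitivity": "confidential",
    "destination_type": "removable_media",
    "user": "jdoe",
})
print(alert["message"])
```

Because the policy is just data, each business can swap in its own rules, which mirrors the customizable alerts the article describes.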
As potential threats grow, so does the amount of information that goes into defeating them. This is where AI and automation can play a key role.
“Azure Sentinel delivers limitless scale to support the growing volume of security data in your organization,” said Sarah Fender, principal group program manager for Microsoft Intelligent Security Graph.
The system includes data links for different security products and services, bringing all of that information into one central place. It then uses automation to take the next actionable steps, as established by the administrator.
Next steps include everything from blocking IP addresses and creating alerts to sending an automatic email to IT. When the incident is over, the program analyzes what happened so that the organization can adjust.
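The automation flow described above can be sketched as an administrator-defined playbook. This is a minimal illustration in the spirit of that description, not Azure Sentinel's actual API; all function and field names are assumptions.

```python
# Hedged sketch of an administrator-configured response playbook:
# incident types map to ordered response steps, and each run is logged
# for post-incident review. Names are illustrative, not Sentinel's API.

def block_ip(incident):
    return f"blocked {incident['source_ip']}"

def create_alert(incident):
    return f"alert created for {incident['type']}"

def email_it(incident):
    return f"emailed IT about incident {incident['id']}"

# The administrator establishes which steps run for each incident type.
PLAYBOOKS = {
    "brute_force": [block_ip, create_alert, email_it],
    "phishing": [create_alert, email_it],
}

def respond(incident):
    """Run each configured step in order and log what happened."""
    log = []
    for step in PLAYBOOKS.get(incident["type"], [create_alert]):
        log.append(step(incident))
    return log

log = respond({"id": 42, "type": "brute_force", "source_ip": "203.0.113.7"})
for entry in log:
    print(entry)
```

Keeping the playbook as plain data makes the "as established by the administrator" part explicit: changing the response policy means editing a table, not rewriting code.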
While threats may always be present, organizations can take steps to protect themselves. By building in tools to protect identity, utilizing AI and integrating solutions across systems, businesses can know their information is more secure.
Find more of BizTech’s coverage of Microsoft Ignite 2019 here.
In addition to Microsoft’s efforts, other firms are seeking to understand the integration of AI and cybersecurity.
Organizations with broad access to big data have a responsibility to secure that data without implementing security measures that prove counterproductive. Among the most apparent fears as we enter the future is that organizations will use the cloak of security to gain more access to data and ultimately implement systems that allow for more unmerited control.
Some experts say the need to maintain cybersecurity far outweighs the risks associated with bias programmed into algorithms. But does the cost of cybersecurity impinge on individuals' freedom, and are there preventative measures that can be implemented alongside new security protocols?
In an article, Forbes contributor Bob Bruns discusses the importance of AI to the cybersecurity industry and some things we can do to maintain integrity while implementing these security measures.
AI and Cybersecurity Pros and Cons
As part of any good AI conversation, we have to consider the potential ramifications of an AI-based model. What are the true risks of harnessing AI to help defend ourselves in cyberspace? It is always possible to misuse the information a security system collects. It’s possible to program in unintentional bias. You could break things too much because AI told you to — or you could miss things because you trust your AI system to catch everything.
Yet as a business community, we must confront these risks and design to prevent these outcomes. The need for more robust cybersecurity is too great. We simply need to be thoughtful in our approaches, develop and use ethical standards around how we leverage these new and evolving technologies, and, finally, use a trust-but-verify methodology as we look to mature our multilayered cyber-defense strategies.
To do this, start by planning ahead and developing a framework for building AI that has pre-approved controls in place. Building human review into the decision-making process can go a long way toward preventing major issues. You can also leverage some of the work already being done to manage insider threats and apply that to controlling runaway AI. And finally, perform threat modeling, and spend time running structured exercises to identify gaps; then think about what sorts of controls are needed to prevent and detect abuse or negative impacts to critical business systems.
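The idea of pre-approved controls plus human review can be sketched in a few lines. This is an illustrative assumption of one possible design, not a prescribed framework: AI-suggested actions run automatically only if they appear on a pre-approved, low-impact list; everything else is queued for a human analyst.

```python
# Illustrative sketch of "human review in the decision-making process".
# The action names and the pre-approved list are assumptions for the example.

PRE_APPROVED = {"create_alert", "quarantine_file"}  # low-impact, auto-run
REVIEW_QUEUE = []  # higher-impact suggestions wait for an analyst

def decide(ai_suggestion):
    """Auto-run pre-approved actions; queue everything else for review."""
    action = ai_suggestion["action"]
    if action in PRE_APPROVED:
        return f"auto-executed: {action}"
    REVIEW_QUEUE.append(ai_suggestion)
    return f"queued for human review: {action}"

print(decide({"action": "create_alert", "target": "host-17"}))
print(decide({"action": "disable_account", "target": "jdoe"}))
```

The design choice matters: the AI can never directly take a high-impact action such as disabling an account, so a biased or mistaken model suggestion is caught at the review gate rather than executed.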
In addition to these procedures, it is important to spend equal time expanding awareness and consciousness of the technology and its aggregate implications, as it is arguable that we have no true means of understanding how AI technology may eventually enable even greater negligence and malpractice than ever before in history.
If we take adequate precautions and allocate resources to developing and leveraging AI to expand our overall awareness, we can apply that insight toward more effective means of using new technology without manipulation. The very nature of cybersecurity is preventive: procedures correspond to overall levels of awareness. Most argue that once security is compromised, there is little that can be done to reverse exposure from the breach.
If organizations are to continue reducing the rate of cyberattacks, they must place emphasis on understanding the optimal integration of human awareness with AI technology to arrive at the best possible outcome.
For more information on this topic, subscribe to our insights publication, as we will continue to provide more relevant research and full-length reports on innovations in cybersecurity and more.