The shift to public cloud, private cloud, and SaaS has become so widespread that the vast majority of workloads now run in the cloud. Many factors have driven this shift, from the increased demand for remote work to lower deployment costs and the fast scalability needed to support dynamic business needs.
Like an orchestra conductor, digital platform conductor (DPC) tools coordinate various existing IT tools to provide a unified set of capabilities for managing infrastructure.
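To make the conductor metaphor concrete, here is a minimal Python sketch of the pattern, assuming a few hypothetical connector functions (placeholders, not any vendor's API): each connector reports assets from an existing tool, and a single routine merges them into one unified view.

```python
# Minimal sketch of the "conductor" idea: pull asset records from several
# existing IT tools and merge them into one unified view. The connector
# functions and field names are hypothetical placeholders, not the API of
# any specific DPC product.
from dataclasses import dataclass


@dataclass
class Asset:
    asset_id: str
    name: str
    source: str          # which tool reported the asset
    environment: str     # e.g. "aws", "on-prem", "endpoint"


def from_cmdb() -> list[Asset]:
    # Placeholder for a CMDB query (e.g. a REST call to an ITSM suite).
    return [Asset("ci-1001", "payments-db", "cmdb", "on-prem")]


def from_cloud_inventory() -> list[Asset]:
    # Placeholder for a cloud provider inventory API call.
    return [Asset("i-0abc123", "payments-api", "cloud-inventory", "aws")]


def from_endpoint_manager() -> list[Asset]:
    # Placeholder for an endpoint-management export.
    return [Asset("ep-778", "cfo-laptop", "endpoint-manager", "endpoint")]


def unified_view() -> dict[str, Asset]:
    """Merge all sources, de-duplicating on asset_id (last source wins)."""
    merged: dict[str, Asset] = {}
    for connector in (from_cmdb, from_cloud_inventory, from_endpoint_manager):
        for asset in connector():
            merged[asset.asset_id] = asset
    return merged


if __name__ == "__main__":
    for asset in unified_view().values():
        print(f"{asset.asset_id:12} {asset.name:15} via {asset.source}")
```

In a real deployment the connectors would call each tool's actual API and the merge step would reconcile duplicates far more carefully; the point here is only the coordination pattern.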
An $80-million fine is enough to get any CIO's attention. And that’s precisely what financial services provider Capital One got whacked with in August 2020, when the U.S. Office of the Comptroller of the Currency (OCC) levied the penalty in the wake of a massive data leak involving the company’s cloud infrastructure. “The OCC took these actions based on the bank's failure to establish effective risk assessment processes prior to migrating significant information technology operations to the public Cloud environment and the bank's failure to correct the deficiencies in a timely manner,” the OCC wrote in its press release. The fine stemmed from a 2019 incident in which a former Amazon Web Services (AWS) employee exposed a treasure trove of sensitive customer data online after Capital One failed to properly secure an AWS S3 storage bucket holding millions of credit card applications.
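As a hedged illustration only, and not a description of Capital One's actual remediation, the sketch below shows the kind of control at issue: using the standard boto3 S3 client to enforce and then verify Block Public Access on a bucket. The bucket name is a placeholder.

```python
# Illustrative sketch: programmatically enforce S3 "Block Public Access"
# on a bucket, then read the setting back to confirm it took effect.
# Requires AWS credentials with s3:PutBucketPublicAccessBlock permission.
import boto3

s3 = boto3.client("s3")
bucket = "example-credit-applications-bucket"  # placeholder name

# Block every form of public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Verify the configuration.
response = s3.get_public_access_block(Bucket=bucket)
print(response["PublicAccessBlockConfiguration"])
```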
How many times have you heard someone say, “If you don’t know what assets you have, how can you manage or secure them?” This saying, or some form of it, has been around for years. The first time I heard it was well over 10 years ago, when I was speaking with the CIO of an enterprise. She told me she could not say with confidence what assets were in her environment or how many, and that other CIOs she spoke with faced the same challenge.
For many CIOs and their IT teams, audits are a painful inconvenience. Traditionally, audits have involved building detailed spreadsheets from data manually collected across various IT asset management systems. A modern technology management platform can transform this error-prone manual process into a powerful way to reduce an enterprise’s attack surface.
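As a small sketch of what that transformation can look like, assuming illustrative field names rather than any specific platform's schema, the script below flags assets missing audit-relevant attributes and writes the findings to a CSV instead of a hand-built spreadsheet.

```python
# Illustrative audit sketch: take asset records (a hard-coded sample stands
# in for real inventory exports), flag the ones missing audit-relevant
# attributes, and write the findings to a CSV an auditor can consume.
import csv

REQUIRED_FIELDS = ("owner", "environment", "encryption_at_rest")

assets = [
    {"asset_id": "i-0abc123", "owner": "payments-team",
     "environment": "prod", "encryption_at_rest": "yes"},
    {"asset_id": "i-0def456", "owner": "",
     "environment": "prod", "encryption_at_rest": ""},
]

# Collect every asset that is missing one or more required attributes.
findings = [
    {"asset_id": a["asset_id"],
     "missing": ",".join(f for f in REQUIRED_FIELDS if not a.get(f))}
    for a in assets
    if any(not a.get(f) for f in REQUIRED_FIELDS)
]

with open("audit_findings.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=["asset_id", "missing"])
    writer.writeheader()
    writer.writerows(findings)

print(f"{len(findings)} of {len(assets)} assets have audit gaps")
```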
I have been selling technology to large businesses for longer than I’d like to admit, and I cannot envision myself in any other occupation. In fact, sales is the best profession on the planet, and it is a vocation I feel lucky to be a part of. I will confess that playing shortstop for the New York Yankees is my dream job, but until the Yanks return my calls, I will continue to put sales at the top of the list of the best and most rewarding careers.
As the pandemic continues to linger and disrupt, enterprises are settling into a new operational model with work-from-anywhere as its core driver. Employees and their technology will continue to work in either a fully remote or, at best, a hybrid model. This comes on top of other disruptive constants such as migration to the cloud and the expanding use of IoT. The scope of managing IT infrastructure is becoming both broader and deeper while remaining mission-critical. The breadth of requirements can lead to analysis paralysis, which can often be mitigated by a relatively short, focused list of what we have found to be best practices.
Enterprises have been advancing their IT service delivery capabilities for quite some time, gaining intelligence about services, the asset dependencies that affect those services, the service level objectives (SLOs), and the experiences that support their business. Organizations continue to invest heavily in IT Service Management (ITSM) functionality, both in ticketing automation and in maintaining the Configuration Items (CIs) that comprise their Configuration Management Database (CMDB). But these systems were not designed to encompass the volume and breadth of technology and business processes that CIOs and their teams must see, manage, and optimize.
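For readers less familiar with the terminology, here is a minimal sketch, with illustrative classes and sample data rather than any ITSM vendor's schema, of what a Configuration Item and its dependency relationships look like and how they roll up to a business service.

```python
# Minimal sketch of Configuration Items (CIs) in a CMDB and how asset
# dependencies roll up to a business service. Classes and sample data are
# illustrative only.
from dataclasses import dataclass, field


@dataclass
class ConfigurationItem:
    ci_id: str
    name: str
    ci_type: str                                          # "service", "application", "server"
    depends_on: list[str] = field(default_factory=list)   # IDs of other CIs


cmdb = {
    "svc-billing": ConfigurationItem("svc-billing", "Billing Service",
                                     "service", ["app-invoice"]),
    "app-invoice": ConfigurationItem("app-invoice", "Invoice App",
                                     "application", ["srv-db-07"]),
    "srv-db-07": ConfigurationItem("srv-db-07", "Database Server 07", "server"),
}


def dependencies(ci_id: str) -> list[str]:
    """Walk the dependency chain for a CI (depth-first, no cycle handling)."""
    chain = []
    for dep_id in cmdb[ci_id].depends_on:
        chain.append(dep_id)
        chain.extend(dependencies(dep_id))
    return chain


# Which assets can affect the Billing Service's SLO?
print(dependencies("svc-billing"))   # ['app-invoice', 'srv-db-07']
```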