I recently presented findings from my Workload Automation (WLA) research and other EMA research in a webinar with Tim Eusterman, Sr. Director of Solutions Marketing at BMC, titled “How Digital Business is Shaping the Next Wave of Automation”. The recording of the webinar is now live here, and the slides are available here.

A number of questions came in that we did not have time to address during the live event. They are answered in this blog post.

Q: Can you elaborate on what you mean by IT bringing a digital vision to the business folks?

A: Don’t wait for them to ask, or worse, go off and do Shadow IT things. If you see a problem or opportunity, formulate a suggestion to make it better through automation or new digital services. Be aware of what your company’s competitors are doing, or what others in similar industries are doing in solving similar problems. Basically, you are the technology expert, but your business counterparts are living in a world where consumers are very sophisticated as to what is possible, so you need to be proactive, not reactive, to remain relevant.


Q: One of the slides showed a big decline in the concern for Workload Automation integration with Big data tools. What are some examples of improvements that have caused this to be less of a concern?

A: There is greater awareness of the limitations of Hadoop’s native tools for scheduling. As a result, there is now tighter integration between enterprise-class WLA schedulers and the Hadoop ecosystem, so those workloads can take advantage of the richer scheduling capabilities of the enterprise tools. To handle the many differing sources of data, capabilities around automated file handling and managed file transfer have been improved, along with support for NoSQL databases. WLA products now have more robust APIs for custom integration, and some let developers control scheduling from within their own code. The result is better integration with Business Intelligence and analytics tools beyond Hadoop, bringing more sophisticated scheduling best practices, oversight, and features to bear on these workloads.
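To illustrate the kind of API-driven integration described above, here is a minimal, hypothetical sketch of application code handing a Hadoop-related job to an enterprise scheduler over REST. The endpoint path, payload fields, and job type are illustrative assumptions, not any specific vendor’s API:

```python
import json
from urllib.request import Request

def build_job_request(base_url: str, job: dict) -> Request:
    """Package a job definition as a REST request for a
    hypothetical WLA scheduler endpoint (illustrative only)."""
    body = json.dumps(job).encode("utf-8")
    return Request(
        url=f"{base_url}/api/v1/jobs",  # assumed endpoint, not a real product API
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A job that runs a Hive query once an upstream file transfer completes.
job = {
    "name": "daily-clickstream-aggregation",
    "type": "hadoop-hive",                         # assumed job-type name
    "command": "hive -f /etl/aggregate_clicks.hql",
    "depends_on": ["clickstream-file-transfer"],   # scheduler resolves ordering
    "schedule": "0 2 * * *",                       # 02:00 daily, cron syntax
}

req = build_job_request("https://scheduler.example.com", job)
print(req.get_method(), req.full_url)
# → POST https://scheduler.example.com/api/v1/jobs
```

The point is not the specific payload but the pattern: the developer expresses dependencies and schedules in code, while the enterprise scheduler retains oversight of execution.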


Q: I’ve not heard the term before, can you explain a little more what Jobs-as-Code is?

A: Jobs-as-Code is a DevOps philosophy focused on increasing the speed of delivering high-quality applications. Many DevOps teams don’t realize how much time they lose moving applications into production because of basic app “plumbing”. Because developers use simple tools and ad hoc scripting to code basic instrumentation (jobs) as they build apps, hardly anyone uses the same tools or adheres to the same standards. The result is that a hodge-podge of job workflow code gets passed to Operations along with the application; it doesn’t meet production standards, requires manual integration, and is very hard to troubleshoot. Jobs-as-Code means shifting left by including jobs as artifacts in your DevOps delivery pipeline. These artifacts that define jobs can be built using a familiar, code-like notation, stored in an SCM together with the code that implements the business logic, then built, tested, promoted from environment to environment, and eventually deployed together with that code, all with the same level of automation. This ensures that your enterprise job scheduling tool can accept the application with no rework or manual intervention. The functionality embedded in a Jobs-as-Code approach should include:

  • Sophisticated flow relationships
  • Extensive application integration to support all your platforms and technologies
  • Operational insight into execution status and progress
  • Output and log collection
  • Support for service level management and business-level abstraction and full security
  • Audit and governance compliance
  • And others

With such an approach, your organization can focus on producing innovative business services rather than divert precious developer talent to building operational plumbing that is unlikely to reach the level of sophistication of a market-tested, enterprise application job scheduling solution.
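To make the “shift left” idea concrete, here is a minimal, hypothetical sketch: a job definition kept as a code artifact next to the application source, with a validation check that a CI pipeline could run before the job is promoted toward production. The field names and rules are illustrative assumptions, not any particular product’s job format:

```python
import json

# A job artifact that would live in the SCM alongside the application code,
# versioned, built, and tested together with it.
JOB_ARTIFACT = """
{
  "name": "nightly-invoice-batch",
  "application": "billing",
  "run_as": "svc_billing",
  "command": "python run_invoices.py --date {{run_date}}",
  "on_failure": ["notify:ops-oncall"],
  "sla_minutes": 90
}
"""

# Illustrative "production standards" the pipeline enforces automatically,
# instead of Operations catching gaps manually at handoff.
REQUIRED_FIELDS = {"name", "application", "run_as", "command"}

def validate_job(artifact: str) -> list[str]:
    """Return a list of problems; an empty list means the artifact
    meets the (assumed) production standards checked in CI."""
    job = json.loads(artifact)
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - job.keys())]
    if job.get("sla_minutes", 0) <= 0:
        problems.append("sla_minutes must be positive")
    return problems

print(validate_job(JOB_ARTIFACT))  # → []
```

Because the artifact travels through the same pipeline as the business logic, a job that fails validation blocks the build early, rather than surfacing as a manual-integration problem in production.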


Q: To what extent is digital disruption likely to transform businesses and jobs in the next 5-10 years?

A: It is already happening and will transform every business in every industry; you can see the disruption all around you now. It is a roller coaster that every company is on. Going forward, it shapes the ability to compete on the digital customer experience and affects service levels.

This will push the self-service model even further. Businesses will change in customer-facing interactions, in interactions with vendors and suppliers, and even in how they interface with employees. Everything will become more self-service through digital interfaces, which speeds everything up further. It exposes more information, both good and bad. It empowers end users and customers and can greatly impact loyalty, positively or negatively, depending on the experience.

Companies will need to become more 24/7 oriented, and lean more heavily on automation, AI, machine learning, and cognitive systems to present a positive experience.

The effect on jobs is beyond the scope of this webinar.


Q: IoT is very intertwined with Performance and Security. How does IoT impact automation where workloads are placed?

A: I presume the question refers to IoT data. This is data typically collected at the edge of the network from machines or electronics that use sensors, actuators, and software to transmit it in some way. Collection or aggregation of IoT data is done in many ways depending on the network, applications, and storage technologies involved, from on-premises to multi-cloud. The common thread for workload automation is dealing with the disparate nature of this data, including its volume, velocity, and volatility.

To properly move the aggregated data from any application or source to a point of collection or processing, such as a Hadoop data lake, and then deliver the ‘processed’ data to the right place at the right time (an analytics or dashboard application, for example) requires a few important capabilities. One is the ability to visualize, schedule, and sequence all the interconnected data movements regardless of the application, operating system, or other infrastructure on which the data resides. The second is being able to calculate and predict the overall status of the work, identify when certain ‘jobs’ are running slowly in ways that could impact the resulting SLA, and remediate that on the fly. And finally, there must be a level of security, logging, and audit capability that enables the business to maintain its overall governance and compliance requirements.

So in this sense, IoT data workloads are no different from any other company data in terms of the security and performance needed to run the business.
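The sequencing and SLA-prediction capabilities described above can be sketched in miniature. In this hypothetical example — the job names, dependencies, and durations are invented for illustration — interdependent data movements are put into an executable order, and a simple check flags whether the end-to-end run still fits inside an SLA window:

```python
from graphlib import TopologicalSorter

# Interconnected data movements: job -> set of upstream jobs it waits on.
DEPENDENCIES = {
    "collect-sensor-files": set(),
    "load-data-lake": {"collect-sensor-files"},
    "aggregate-readings": {"load-data-lake"},
    "publish-dashboard": {"aggregate-readings"},
}

# Expected run times in minutes (illustrative estimates).
DURATIONS = {
    "collect-sensor-files": 15,
    "load-data-lake": 30,
    "aggregate-readings": 20,
    "publish-dashboard": 5,
}

def plan(deps, durations, sla_minutes):
    """Return jobs in a dependency-respecting order plus a simple SLA
    check: with a linear chain, the summed duration is the critical path."""
    order = list(TopologicalSorter(deps).static_order())
    total = sum(durations[j] for j in order)
    return order, total, total <= sla_minutes

order, total, within_sla = plan(DEPENDENCIES, DURATIONS, sla_minutes=90)
print(order, total, within_sla)
```

A real WLA product does far more — cross-platform agents, live remediation, audit trails — but the core reasoning is the same: model the interconnected movements explicitly, then predict whether the work will land inside its SLA before it actually misses.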