Tuesday, September 12, 2017

The CIO's data dilemma: The paradox of plenty?

Classical data strategies have focused on "what the data looks like" and whether it can provide "answers." Newer approaches move beyond that to "how can I use what I have" and whether it can provide "directions."

Originally published on my CIO.com blog: SETHspeak
"Data, Data, everywhere, Nor any information to think"
-Paraphrasing Samuel Taylor Coleridge's famous lines from the Rime of the Ancient Mariner.
At times it does feel like we are in a "paradox of plenty" situation, somewhat akin to a resource curse, where established corporations with an abundance of data find themselves losing the race for market competitiveness to newer players who have far less data.

Why?

My initial thought was that until recently the focus of most corporations had been on mining their "historical" data.
However, with the world of today generating a steady and ever-growing stream of "real-time" or "near real-time" data, corporations need to wake up to the new reality that much of their historical data is not as relevant or valuable as they think it is.
In the absence of real-time data, historical data is often used as a proxy to make some predictions. But with real-time data being available now, that proxy is no longer needed or is no longer as relevant.
This has a big benefit: corporations that feel they have fallen behind in the race to mine historical data do not necessarily need to play catch-up. They can make up for the lost opportunity by creating a framework to leverage real-time data streams.
Essentially, corporations can leapfrog and catch up with, or even move beyond, other players without getting caught in what I'll call the legacy data trap: ditch the legacy data, since most of it may not be as relevant as you think. Food for thought?

What does the Data Doc think?

I bounced this idea off Tom Redman, "the Data Doc." He was skeptical. While he agreed that companies need to wake up, he had two reasons for his skepticism.
First, real-time and historical data support different sorts of analyses and opportunities. He did not see one as a surrogate for the other.
Second, the biggest "gap" is the ability to analyze data and sort out what to do with those analyses. Real-time data does not address that gap.
Tom made some great points.

My response

Until now, most of the energy and resources of corporations were devoted to "historical" data, since the capabilities to harness real-time or near real-time data did not exist. Now there has been a sudden explosion in both the volume of real-time data and the tools to manage it.
As a result, there will be a shift of attention and resources from historical to real-time data, since both attention and resources are fixed and limited. Also, for many areas, an effective handle on real-time data is all that may be needed.
For example, we drive on the roads using just the real-time data presented on the dashboard (speed, rpm, engine temperature), with no need for any historical data to meet the immediate need of going from point A to point B.

What do you think?

This could be an interesting survey question to ask CIOs and CDOs:
Of your total data management spend, how much will you allocate to mining historical data vs. managing real-time data, and why?
This may offer some interesting insights on how this entire area is evolving.

What implications does all this have on data strategy?

  • Exact vs. Roughly Right: For historical data, the emphasis on getting all data in the right formats, with the right definitions and in common data stores, needs to go. Such an approach has created the mental and execution block that no meaningful insights are possible until considerable time and resources have been spent on getting it all "right."
  • Consolidation vs. Federation: Approaches where data is pulled from various data sources into a single repository need to be replaced by approaches where data stays in its parent repositories but gets "pulled" as needed. A federated data application framework? IBM Watson Discovery Service does something like that, but seems to do it only for unstructured data. Fraxses seems to do it for both structured and unstructured data. With the kind of capabilities available now, physically moving data into a distinct data store (lake) may not be required; the lake may be virtual. This may be a quicker approach too (a minimal sketch of the idea follows this list).
  • Internal vs. External: In most corporations, data strategies have been inward looking; that is, they have focused on internal data. In today's world, any meaningful data strategy has to cover internal as well as external data. How to combine internally available data with publicly available or acquired external data to deliver business-focused insights is a question the strategy needs to answer.
  • Defense vs. Offense: Data strategy should support both "exact" reporting (e.g., for finance and accounting purposes) and "directional" reporting (e.g., for strategy and business development purposes). Until now the focus has been on exact, which has meant that not all available data has been effectively utilized. There is always a significant amount of data which is not "exact" but can still provide meaningful insights when weighted appropriately (e.g., when playing Jeopardy!, Watson did not come up with just one correct answer but several candidates with appropriate weights). A recent Harvard Business Review article, "What's Your Data Strategy?", described this as defense vs. offense: companies make considered trade-offs between defensive and offensive uses of data and between control and flexibility in its use. Leandro DalleMule and Thomas H. Davenport summed it up well in that article:
There is no avoiding the implications: Companies that have not yet built a data strategy and a strong data-management function need to catch up very fast or start planning for their exit.
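To make the federation idea above a bit more concrete, here is a minimal illustrative sketch of a "virtual lake": each question pulls data from its parent store on demand and combines the results in memory, with nothing copied into a central repository. The database file, table and API endpoint below are hypothetical placeholders for illustration only, not a reference to how Watson Discovery or Fraxses actually work.

```python
# Minimal sketch of a "virtual data lake": data stays in its parent stores
# and is pulled on demand, then joined in memory to answer one question.
# The database file, table and API endpoint are hypothetical placeholders.
import json
import sqlite3
from urllib.request import urlopen

def orders_from_warehouse(customer_id):
    """Pull structured order history from an internal relational store."""
    with sqlite3.connect("internal_orders.db") as conn:  # placeholder source
        rows = conn.execute(
            "SELECT order_id, amount FROM orders WHERE customer_id = ?",
            (customer_id,),
        ).fetchall()
    return [{"order_id": r[0], "amount": r[1]} for r in rows]

def sentiment_from_external_api(customer_id):
    """Pull external data in place via a (hypothetical) public API."""
    with urlopen(f"https://api.example.com/sentiment/{customer_id}") as resp:
        return json.load(resp)  # e.g. {"score": 0.7, "mentions": 12}

def customer_view(customer_id):
    """Federate: combine the sources at query time, no physical consolidation."""
    return {
        "customer_id": customer_id,
        "orders": orders_from_warehouse(customer_id),
        "sentiment": sentiment_from_external_api(customer_id),
    }

if __name__ == "__main__":
    print(customer_view("C-1001"))
```

The point of the sketch is the shape of the approach: the "lake" is just a query-time view over internal and external sources, which also speaks to the Internal vs. External point above.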
(Originally published on my CIO.com blog: SETHspeak)

Thursday, July 27, 2017

Cloud vs. clouds: A CIO’s conundrum

Beware of cloud "stickiness." Plan for redundancy. Clouds do fail!

Originally published on SETHspeak, CIO.com, July 13, 2017
There was a tear in the fabric of the cloud universe on February 28, 2017 when Amazon Web Services had a significant outage.
It highlighted what some described as a “critical lack of redundancy across the internet.” The outage was a wake-up call for many to build in redundancy (both multi-region as well as multi-provider) in their cloud strategy.

Can I jump from cloud to cloud?

So the conversation has moved from cloud to clouds.
However, good tools that allow seamless, dynamic interoperability between public cloud providers, especially in the PaaS and SaaS space, are hard to find.
Public cloud providers' idea of redundancy is to provide geo-redundant storage, replicating data to a secondary region that is hundreds of miles away from the primary region. They talk about seamlessly migrating from one to another in quick easy steps but are silent about dynamic real-time interoperability.
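For what it is worth, here is a minimal sketch of the single-provider, multi-region redundancy that providers do offer today, using AWS S3 cross-region replication via boto3. The bucket names and IAM role ARN are placeholders, and replication also assumes the role and destination bucket already exist; multi-provider redundancy is exactly what this kind of setup does not give you.

```python
# Sketch of multi-region redundancy on a single provider: AWS S3
# cross-region replication configured with boto3. Bucket names and the
# IAM role ARN are placeholders; the replication role and the secondary
# bucket (in another region) are assumed to exist already.
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on both the source and destination buckets.
for bucket in ("my-primary-bucket", "my-secondary-bucket"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

# Replicate new objects from the primary bucket to the secondary region.
s3.put_bucket_replication(
    Bucket="my-primary-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "to-secondary-region",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::my-secondary-bucket"},
            }
        ],
    },
)
```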

Dynamic real-time interoperability (DRTI – Voila! A new acronym is born)

What I mean is that, ideally, these tools should allow dynamic switching between public cloud providers based on rates, availability, etc.
Customers are getting wary about putting all their eggs in one basket. They would also like to leverage differential rate structures across public cloud providers to their advantage.
What one really wants: a cloud load balancer/scheduler that switches the processing to whichever cloud offers the best mix (rate, availability, processing speed, etc.) at that point in time. I cannot say there is one quite there yet.
Interestingly, a patent assigned to Red Hat Inc., "Methods and systems for load balancing in cloud-based networks," has been out there for quite some time, but it does not seem to have been effectively commercialized yet.
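To illustrate the idea rather than any existing product, here is a toy sketch of such a scheduler: pick a provider per workload based on its current rate and availability. The provider names, rates and selection rule are hypothetical; a real DRTI tool would also have to deal with data gravity, differing provider APIs and egress costs.

```python
# Toy "cloud load balancer/scheduler": choose a provider for each job based
# on current rate and availability (latency as tie-breaker). Provider names
# and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class ProviderQuote:
    name: str
    rate_per_hour: float   # current rate for the required instance shape
    available: bool        # capacity available right now?
    latency_ms: float      # observed latency to the provider's region

def pick_provider(quotes):
    """Choose the cheapest available provider, breaking ties on latency."""
    candidates = [q for q in quotes if q.available]
    if not candidates:
        raise RuntimeError("No cloud currently available: fail over to plan B")
    return min(candidates, key=lambda q: (q.rate_per_hour, q.latency_ms))

if __name__ == "__main__":
    quotes = [
        ProviderQuote("aws", rate_per_hour=0.096, available=True, latency_ms=18),
        ProviderQuote("azure", rate_per_hour=0.090, available=True, latency_ms=25),
        ProviderQuote("gcp", rate_per_hour=0.094, available=False, latency_ms=15),
    ]
    best = pick_provider(quotes)
    print(f"Dispatching workload to {best.name} at ${best.rate_per_hour}/hour")
```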

Why is my cloud so “sticky”?

This brings me to the other pet peeve about the existing public cloud providers: “stickiness.”
The economic model presented by leading public cloud providers highlights a “consumption-based infrastructure,” which moves the cost model from CapEx-centric to OpEx-centric, and aligns costs directly to usage (see "Pay-as-you-go IT: CFO’s dream, CIO’s nightmare/opportunity?").
However, just like the classic software/hardware vendors, public cloud providers have tried to create a “stickiness” using technology or commercial barriers to make seamless interoperability across multiple providers somewhat of a challenge. (Oh, AWS Cloud is the AWS Cloud and Azure Cloud is the Azure Cloud, and never the twain shall meet – with apologies to Rudyard Kipling.)
In SaaS, vendors are perhaps getting too “sticky”: they like to own the application, platform, infrastructure, cloud, the whole shebang!
That works fine with me when I am the vendor for others, but being at the receiving end of the stickiness is scary.
Perhaps it could be reduced a bit for some applications where other vendors are pulled in for the underlying platform or infrastructure (Oracle SaaS or Salesforce on an AWS cloud). But all application vendors claim their applications work best on their own cloud!
I do not blame them for that. Any vendor in any line of business would like to make sure the customer stays with them and goes no place else.

The future is cloudy!

If I had access to the ear of Jeff Bezos or Satya Nadella or Sundar Pichai or Larry Ellison, this is what I would whisper: Build the cloud “switch.”
That means a tool/appliance/service that allows storage capacity or processing loads to move from cloud to cloud or cloud provider to cloud provider based on cost and demand.
Create a true “cloud grid” like the electricity grid (where distribution companies can draw power from whichever generating company based on rates and availability).
What next: a “computing exchange” where processing and storage capabilities are traded and futures are locked in like any other commodity (electricity, jet fuel, etc.). Now, I may be getting ahead of myself.

Don’t forget the R word

Well, in the short term:
  • Think redundancy
  • Plan redundancy
  • Execute redundancy
You are only as good as your plan B.
Originally published on SETHspeak, CIO.com

Monday, July 10, 2017

The CIO and the driverless car: Are you ready for the Transportation as a Service (TaaS) revolution?

(Image credit: Volkswagen)
Originally published on CIO.com
Well, not many of us have seen a driverless car yet (the closest I have come to one is the Tesla, with its many autonomous driving features). But it could very well be a “platform disrupter,” which throws us off track if we don’t prepare for it.
In my comments published in the Harvard Business Review, I had opined that failure to recognize platform disrupters (and the self-driving car is one) can be very detrimental to corporate health and existence. (…by focusing on the downstream disrupters and failing to recognize these Platform Disrupters, companies are missing the woods for the trees.)

Are you kidding?

That’s what some of you may ask. 
How can an autonomous car or driverless car or self-driving car or whatchamacallit impact my company and above all disrupt IT?
Combine facets of the “sharing economy” with it (think Uber, Lyft) and you have a veritable TaaS (Transportation as a Service). And that can be a game changer.
For starters, the study Rethinking Transportation 2020–2030 (from RethinkX, an independent think tank that analyzes and forecasts the speed and scale of technology-driven disruption and its implications across society) highlights the impact Transportation as a Service (TaaS) is likely to have on entertainment, work and other opportunities:
Americans spend around 140 billion hours in cars every year, a number that will increase by 2030. The TaaS disruption will free up time otherwise spent driving to engage in other activities: working, studying, leisure options and sleeping. 
This will act as an increase in productivity and provide a boost to GDP (see Part 3.5). From the TaaS provider perspective, additional services could be offered, such as entertainment (movies, virtual reality), work services (offices on wheels) and food and beverage (Starbucks Coffee on wheels).
Providers could act as distributors, earning revenues via a range of business models, including a percentage of sales generated on their platform (as in the Amazon and Apple stores), advertising revenues from onboard entertainment (similar to the Facebook and Google AdWords models), or the as-yet undeveloped business innovations that are likely to arise from the TaaS disruption. 
  • The car being so comfortable that people spend more time in it rather than less
  • Think of it as the workplace of the future: work in there, pay bills, etc.
  • Touch screens all around (“immersive”)
This space is evolving very fast. Ford has appointed a new CEO who had only recently been brought in to head up its “smart mobility” operations, an indication that the big players are acknowledging the direction the industry is heading. Cars are no longer just transportation but part of a larger “smart mobility” initiative.
It is this “offices on wheels”/“smart mobility” premise which should get companies thinking. 
Our cars of the future may be evolving as an extension of the workplace/office.
Do all the auto manufacturers see that yet? Perhaps not. They suspect it, but do not seem sure how to factor it into their designs. That’s where other players could step up, take the lead and create/define a market.
Not all TaaS cars may be offices, but in big cities customers may in the future be able to request a ride in a vehicle equipped with office capabilities. The jury may still be out on Uber’s Pittsburgh driverless car experiment, but it does show the shape of things to come.

What does this mean for the CIO?

As employees start treating the car as an extension of their offices, IT infrastructure teams will need to figure out the best ways of implementing “telepresence” in cars while assuring seamless connectivity. The autonomous car could also very well be the epitome of IoT/edge computing.
There would be major implications for information security as well. The driverless car, with its ubiquitous connectivity to the external environment and to other vehicles on the road, multiplies the threat vectors manifold compared to the current, physically static office environment.
The question you may need to answer in the near future: Is your business ready for Transportation as a Service? And what can IT do to facilitate that?
A few thoughts:

For the business leaders, especially if you are in a business sector even remotely involved with the “office”:

  1. Preliminary assessment of demand potential for office capabilities in driverless vehicles.
  2. Engage with vehicle manufacturers (Ford, Toyota, Tesla, Navistar, etc.), driverless capability service providers (Uber, Lyft, Google, etc.) and automobile accessory companies (Harman Kardon, Pioneer, etc.) to understand their vision of the “office package” for the driverless vehicle of the future.
  3. “Sell” them on the need to include office capability as part of their “office package” offerings.

For the CIO

  1. Engage with business stakeholders to ascertain where the “driverless car” or TaaS fits in their business plans/vision for the future and associated timelines thereof.
  2. Ascertain the requirements and gaps from an infrastructure and security perspective for extending the office to the car.
Get ready for the ride. It will be arriving at your doorstep soon.
Originally published on CIO.com
