Tuesday, April 25, 2023

Generative AI: No Need to Fear!

In an interview in late 2020, I spoke at length about the "Fear of AI". Now I can see some of the same fears manifested in the context of #generativeai (ChatGPT, Bard, and others). However, what I said about AI in general then still holds true for generative #ai now:

----------------------------
First of all, this fear of AI is not actually a fear of AI per se. It is the fear that humans have towards anything new or different, especially on the technological side. It has happened through the ages, from when humans first discovered fire. When the power loom was introduced in England, a group of people termed “Luddites” went around breaking the looms because they thought the machines would take away their jobs as weavers. Horse-cart drivers thought their jobs would go away when cars arrived. Yes, the jobs did go away, but society evolved new kinds of jobs and new kinds of roles.

On that same continuum, AI has now taken the place of the new technology that we don’t fully fathom; we don’t know what it can do to us, so people are scared of it.

Every piece of technology that has emerged over the years has helped us become better humans, or at least we have strived to use it in a way that helps us become better humans. I’m sure artificial intelligence will help us become better humans and will expose new dimensions of the human experience that we have not explored so far.

When there were no cars, some dimension of speed and connectivity was missing. Then automobiles came about, and we discovered it; that helped us get better. When airplanes came, they made the world smaller. We may not be aware right now of what doors or avenues artificial intelligence may open. But that doesn’t mean we should take the view that all of it will be bad. As an optimist, I believe most of it will be good and some of it will be bad. Every technology is a double-edged sword. But hopefully, ultimately, it will all work out for the betterment of society.
------------------------------------------

What are your thoughts? Is Generative AI something to be feared, or is it one more thing that will complement human endeavors, as many other technologies have through the ages?

Thursday, February 14, 2019

AI meets Fracking: Artificial Intelligence/Machine Learning in the Oil & Gas sector



First published in MyTechMag
The traditional components of AI (perception, decision-making/cognition, and action) map quite precisely onto the business problems the Oil & Gas sector deals with: collecting and processing lots of data (perception); deciding where to drill (decision-making); and drilling and processing most efficiently (action). This is my take as an outsider looking in.
A couple of years ago I moved from Upstate New York to Texas. In a lighter vein, that can be described as a move from “no fracking please, we’re New York” to “Drill, baby, Drill”. Time magazine described it well in a recent issue: How an Oil Boom in West Texas Is Reshaping the World. Oil and Gas exports are playing a key role in a resurgent US economy.
Living in Fort Worth, TX, and knowing my interest in emerging and disruptive technologies, fellow IT professionals I meet at various events often ask me: what are my thoughts on the role of AI in the Oil and Gas sector? Well, I cannot claim great expertise in the sector (unless I count the time I spent hanging out with Schlumberger folks as a fellow expatriate in Vietnam in the early-to-mid 90s).
Let’s start with the traditional components of AI:

  • perception
  • cognition/decision-making (very critical; machine learning can be used to make decisions much more effectively)
  • action

and see how these can play out in the various segments of the Oil and Gas industry:

Upstream or E&P (Exploration and Production) sector:

This includes companies/technologies that extract crude oil or natural gas.

Role of AI:

o  Reduce onsite operating costs by using AI-powered sensors and the Internet of Things (IoT) to handle data collection and system control in real time.
o  Eliminate costly drilling risks by identifying the best sites.
o  Manage wells, reservoirs, and facilities.

Midstream:

Processes, stores, and transports crude oil, natural gas, and liquefied natural gas.

Role of AI:

o  Real-time intelligent systems with forecasting and optimization capabilities for better decisions and operating performance.
o  Oil and gas well surveying and inspections.

Downstream:

Includes oil refineries, petrochemical plants, petroleum product distributors, and natural gas distribution companies.

Role of AI:

o  Data models and predictive algorithms that develop software-based insights and solutions for the end-to-end management of water and other consumables.
o  Streamlined refinery and petroleum delivery operations to accelerate revenue growth.

There are certain aspects where AI has a role to play across the entire value chain:

  • AI can help oil and gas companies lower costs and make more accurate decisions.
  • Planning and forecasting
  • Predictive maintenance

As in any other sector, there will be a wrong way and a right way to go about this.

The Wrong Way

  • We've got massive amounts of data. There must be some value in it. Let's use AI to figure it out.
  • Can a computer replace the brains of all my employees and do more efficiently what they do?
  • Can I get some AI "Magic Dust", sprinkle it on my organization and make it smart?

The Right Way

Let’s play it through with a representative use case:

Start with a problem (e.g., “As wells become more complex, logging and LWD (Logging While Drilling) measurements become more challenging to obtain”).

Write down the solution you’d like to have (perhaps a capability to provide reservoir properties without running expensive logging tools).

Then work backwards and figure out what kind of automation might support this goal (an AI-driven predictive model? see the sketch after this walkthrough).

Break the problem down into the traditional components of AI:

  • perception (use gamma ray logs and drilling dynamics data (ROP, weight on bit, torque, etc.) from many domestic and international basins)
  • decision-making (ascertain the predictive accuracy and repeatability of the AI-driven logs vs. actual logging, and the associated tool costs, using static and adaptive models)
  • action (provide the operator with critical formation data for horizontal wells that would have been cost-prohibitive to obtain had the wells been logged using conventional methods)

Map those onto different parts of the business problem:

  • validate that the AI models constructed for various formations perform within the expected parameters in blind tests;
  • quantify the savings delivered to the operator on data acquisition costs;
  • ensure AI-based logging data are delivered to the operator daily to assist with geosteering, drilling, and completion decisions while drilling operations are ongoing.

Then work back to whether you have the data you need and how you would collect it (a framework for planning a data acquisition program using conventional logging tools).
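To make the walkthrough concrete, here is a minimal, purely illustrative sketch of what such an AI-driven predictive model might look like. The file name, column names, and target property (porosity) are hypothetical placeholders, not an actual operator workflow.

```python
# Illustrative sketch only: predict a reservoir property from gamma ray logs
# and drilling dynamics data, then validate against a held-out "blind test".
# The CSV file, column names, and target (porosity) are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Perception: data collected while drilling (hypothetical schema).
df = pd.read_csv("drilling_data.csv")  # columns: gamma_ray, rop, weight_on_bit, torque, porosity
features = ["gamma_ray", "rop", "weight_on_bit", "torque"]
X, y = df[features], df["porosity"]

# Hold out a blind-test set, mirroring the validation step described above.
X_train, X_blind, y_train, y_blind = train_test_split(X, y, test_size=0.2, random_state=42)

# Decision-making: train a static model; an adaptive model would be retrained
# as each new well is logged.
model = GradientBoostingRegressor(random_state=42)
model.fit(X_train, y_train)

# Action: check the AI-derived "logs" against blind-test measurements before
# relying on them in place of conventional logging runs.
predictions = model.predict(X_blind)
print("Mean absolute error on blind test:", mean_absolute_error(y_blind, predictions))
```

In practice, the data acquisition program in the last step determines whether enough conventionally logged wells exist to train and blind-test a model like this in the first place.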

In conclusion, I think there is tremendous opportunity for leveraging the entire continuum of AI, Machine Learning, Deep Learning, and Data Science in the Oil and Gas sector to drive efficiencies, even as we remain mindful of the following caveat:
AI is about using math to make machines make really good decisions. At the moment it has nothing to do with simulating real human intelligence. Once you understand that, it kind of gives you permission to think about how a set of data tools—things like deep learning and auto machine learning and, say, natural language translation—how you can put those into situations where you can solve problems. Rather than just saying “Wouldn’t it be good if the computer replaced the brains of all my employees so that they could run my company automatically?”
-Andrew Moore, Google Cloud AI

Wednesday, April 18, 2018

Digital exhaust royalty payment: will it solve some of Facebook and Google’s personal data usage travails?


Has the time now come for users to be paid royalties for the use of their “digital exhaust” by corporations like Facebook and Google? Will a digital exhaust royalty framework be a win-win for both: the corporations whose business model is based on monetizing this exhaust, and the individuals who are increasingly wary of this exhaust being put to wrong use without their knowledge or permission?

I like to make beginning-of-the-year predictions about which technological innovations will come into play in the New Year. Some I get right, but yes, some I do get wrong. One such prediction that did not come to fruition was the one I made in 2015:
‘Pay me or else stay away’
Some capabilities for monetizing the digital exhaust created by individuals will emerge so that they can track and receive "royalties" for use of the same by corporations; and some of their privacy concerns can be mitigated ("pay me if you want to use my location data or online purchase history”)
But now, as the Facebook/Cambridge Analytica scandal plays out and with GDPR looming on the near horizon, I think it may finally be time for the idea to come to life.
Users have gotten used to the “free” services that Facebook, Google, et al. provide. But everyone knows in the back of their mind that, given the multi-billion-dollar revenues these companies are generating, nothing is actually free. The users’ data is being bundled and sold to marketers and the like so that they can do targeted messaging. The data is as likely to be used by a product marketer for targeted ad placement as it is by a political party for focused political messaging.
The core item at play is our “digital exhaust”. Back in 2013, McKinsey talked about organizations mining “exhaust data” (Competing in a digital world: Four lessons from the software industry):
In addition to creating new revenue streams by amping up traditional product and service offerings, organizations have been mining “exhaust data”—information that is a by-product of normal business operations—for use in developing new products. Such by-products, for instance, allow credit-card companies to monetize transactional data from cardholders by analyzing and selling these data to merchants.
But since then, the exhaust has amplified. All our actions are generating “digital exhaust”: every credit card swipe, every click of a smart television remote, every Google search, highway cameras, phone records, medical history, social media likes, every purchase transaction, and so on. All of these are available for companies to bundle and market to the highest bidder.
In fact, I would say companies like Facebook and Google are in the business of monetizing digital exhaust. And that’s not a bad business to be in per se. They figured out a way of monetizing something while providing something of value to us in return (say, search or socialization or navigation), and in the process generating even more digital exhaust for them to monetize.
The business model as it exists now allows these companies unlimited liberties to use the exhaust, and it may have become increasingly lopsided in their favor, as their burgeoning revenues indicate. Is it time for the pendulum to swing back a bit towards the users who are generating the digital exhaust in the first place? This brings me back full circle to my original premise: has the time finally come
“for monetizing the digital exhaust created by individuals so that they can track and receive ‘royalties’ for use of the same by corporations.”
Having a well-defined royalty framework can help alleviate privacy concerns. People will be compensated for the use of their information, and they can track who is using it and for what. A user can choose which kinds of uses their data can be put to: for example, accept royalties for use by marketing companies but opt out of use for political targeting.
A royalty framework already exists in the music, entertainment, and publishing industries. Royalties accrue to recording artistes or performers every time their music is played on the radio or their show airs on television. Now I am talking about evolving that framework for the digital era, where every person is an artiste, their digital exhaust is their musical creation, and its use by another party is akin to the song being played on the radio.
Pipedream or reality?
I think technology and public opinion are evolving to make such a model feasible:

Technology

Blockchain could emerge as a way of tracking the use of a person’s digital exhaust.
This would be an extension of a trend that is already emerging in the entertainment industry: Is Blockchain The Answer To Better Royalty Accounting and Payments? What this article proposes for music royalties can very well apply to digital exhaust royalties too:
Put simply, the blockchain format can embed all the necessary accounting, usage rights, and creator credit information right into the song file itself. That means wherever that file goes, the information needed to know who to pay, how it can be used, and more travels along with it.
All digital exhaust “transactions” for a person could be tracked in a blockchain ledger, with royalty revenues accruing in a cryptocurrency like bitcoin.
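As a purely illustrative sketch (not a real blockchain implementation), here is roughly how such digital exhaust “transactions” could be recorded in a hash-chained ledger so that a person can see who used their data, for what purpose, and what royalty accrued. All field names and the royalty amount are hypothetical.

```python
# Illustrative only: a tamper-evident, hash-chained log of digital exhaust usage
# with per-user royalty accrual. Field names and amounts are hypothetical.
import hashlib
import json
import time
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExhaustUsageRecord:
    user_id: str          # whose digital exhaust was used
    consumer: str         # e.g., an ad network or mapping service
    data_type: str        # e.g., "location history", "purchase history"
    purpose: str          # e.g., "targeted ads"; basis for enforcing opt-outs
    royalty_owed: float   # royalty accrued for this use, e.g., in a cryptocurrency
    timestamp: float = field(default_factory=time.time)
    prev_hash: str = ""

    def record_hash(self) -> str:
        payload = json.dumps(self.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


class ExhaustLedger:
    def __init__(self) -> None:
        self.records: List[ExhaustUsageRecord] = []

    def add_usage(self, record: ExhaustUsageRecord) -> None:
        # Chain each record to the previous one so the usage history is tamper-evident.
        record.prev_hash = self.records[-1].record_hash() if self.records else "genesis"
        self.records.append(record)

    def total_royalties(self, user_id: str) -> float:
        return sum(r.royalty_owed for r in self.records if r.user_id == user_id)


ledger = ExhaustLedger()
ledger.add_usage(ExhaustUsageRecord("alice", "ad-network", "purchase history",
                                    "targeted ads", royalty_owed=0.002))
print(ledger.total_royalties("alice"))
```

A real-world version would of course live on a distributed ledger, with the opt-in/opt-out choices described above enforced by smart contracts rather than a single in-memory list.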

Public Opinion

The idea may sound controversial right now, but public opinion seems to be shifting (see LA Times: “Conservatives, liberals, techies, and social activists all love universal basic income: Has its time come?”).
Let’s take the Universal Basic Income premise a bit further. What if, in a robotic era, the digital exhaust we now consider a byproduct becomes the actual product that most humans create? If producing digital exhaust is the “job”, then shouldn’t humans be paid for doing it? Could Universal Basic Income, rather than being a dole, be a fair wage for the job being performed: the creation of digital exhaust?
Much needs to happen before a Digital Exhaust Royalty framework becomes a reality. But the question is not why or how, but when. What was a pipedream in 2015 may very well become a reality by 2020. What do you think? Stay tuned and join the conversation.
Originally published on CIO.com (read this and other posts at SETHspeak).

Friday, January 5, 2018

Thank You and Happy New Year 2018 - Friends, "Followers" and Contacts!



2017 has come to a close; like any year, it brought its share of joys and sorrows, successes and failures, et al.
But I am sure we emerged wiser and stronger, hopefully none the worse for having been through the wringer called 2017.
2018 - awaits over the horizon with a shining glow!
Wanted to end 2017 with a warm and sincere THANK YOU! Thanks for reading my blog posts, commenting on them, and sharing them with your own networks.
I am humbled by your feedback and feel privileged to be part of this virtual community of ours.
Many of you I know personally, as our paths have crossed on our life and career journeys, and some of you I have met only via LinkedIn. I wish all of you and yours the very best for 2018!
What a journey we went on in 2017 - Innovation, Data, Driverless Cars, APIs, Talking Toasters and Listening Copiers et al. Hope you found these useful and enjoyed reading them as much as I did writing them: 

CIOs, listen up: voice recognition meets the printer!

Got SD-WAN?

Why CXOs need to jump off a plane

The CIO's data dilemma: The paradox of plenty?

Cloud vs. clouds: A CIO’s conundrum

The CIO and the driverless car: Are you ready for the Transportation as a Service (TaaS) revolution?

5 Leadership Lessons from the Indian Meltdown (ICC Champions Trophy Final- Cricket)

My Copier/Toaster Knows I Am Angry!: Building Emotion AI in Devices.

Healthcare Sector and the API Economy

3 Disrupters* for the Copy/Print Industry

Let's stay engaged in 2018. You can also read my CIO.com blog: SETHspeak, follow me on Twitter: @setdeep and enjoy my daily quotation share in your LinkedIn feed: #DeepaksDailyQuote.
Thanks again - you are a big part of what makes each and every day an exciting and interesting one for me.
Stay Safe, Stay Positive, Keep Dreamin' .......and be ready to greet and live 2018 with the gusto it deserves!

Thursday, December 21, 2017

CIOs, listen up: voice recognition meets the printer!

First published on SETHspeak (CIO.com)
Voice/speech recognition is increasingly part of the new technology paradigm the CIO is confronted with. The sector is evolving rapidly, and what was hitherto considered a home fad is making inroads into the office environment. Google, Siri, Alexa, Cortana, et al. are all very much upping the ante to bring voice control to the office of the future.

AWS re:Invent 2017, held November 27 to December 1, 2017, lived up to its promise as “AWS kept the announcements coming at a frenetic pace this week.”
Among the announcements that interested me most was “Amazon is putting Alexa in the office.” It piqued my curiosity because one of the first custom skill use cases that Alexa for Business touts as part of its Alexa Skills Kit relates to printers in an office environment (full disclosure: I work at Xerox):
“For example, you could build a skill that lets a user report a printer problem to IT, and the skill could use the device location so that IT knows which printer is broken.”
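As a rough sketch of what such a custom skill might look like (my own illustrative example, not Amazon’s or Xerox’s code), here is a minimal handler using the ASK SDK for Python; the intent name and the ticketing helper are hypothetical.

```python
# Minimal sketch of a hypothetical "report a printer problem" custom skill
# handler built with the ASK SDK for Python. The intent name, printer lookup,
# and ticketing helper are illustrative assumptions.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name
from ask_sdk_model import Response


class ReportPrinterProblemHandler(AbstractRequestHandler):
    """Handles a hypothetical 'ReportPrinterProblemIntent' utterance."""

    def can_handle(self, handler_input):
        return is_intent_name("ReportPrinterProblemIntent")(handler_input)

    def handle(self, handler_input) -> Response:
        # The device ID identifies which shared Echo device (and hence which
        # room and printer) the report came from.
        device_id = handler_input.request_envelope.context.system.device.device_id

        # Hypothetical helpers: map the device to its printer and open a ticket
        # in the IT service-management system.
        # ticket = open_it_ticket(printer_for(device_id), issue="paper jam")

        speech = ("I've reported the printer problem to IT for this location. "
                  "They will follow up shortly.")
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(ReportPrinterProblemHandler())
handler = sb.lambda_handler()  # entry point when the skill is hosted on AWS Lambda
```

The device ID is what would let the skill map a shared office device to a room, and hence to a specific printer, so IT knows which machine is broken.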
Reminds me of how Andy Slawetsky described what he saw at a Xerox new product launch event earlier in the year:
Donna Davis of Vision-e showed us an app they built that adds these new Xerox copiers into the Amazon Alexa ecosystem. Walk up to the machine and tell it to color copy 50 pages, two-sided and stapled and watch it go. I saw it work. Those of you that have an Alexa get how powerful this is. Those of you that don’t, can’t appreciate it properly until you try it. While it may look like a gimmick or just cool “thing,” it’s much more. It’s a true time-saver around the house, adding things to grocery lists without interrupting what you’re doing, playing music, giving you the news and weather, looking things up for you, etc.
Take that mentality to the office and ask yourself how much easier would it be for customers to walk up to the MFP, drop the documents in the feeder and say “email these to Andy,” and it just does it. “File these invoices” and they wind up in the system. There’s some real opportunity here. Or authenticate with voice instead of a card you need to swipe every time. And guess what. Xerox didn’t think of it. Xerox enabled it. This came from a dealer’s development company.
Not only has Xerox created a platform that enables their channel partners to customize the products while building net new recurring revenue streams, they’ve gone further by allowing customers to develop their own apps. And this is what could put the MFP at another level.
Check out the cool video of Alexa voice recognition interacting with the Xerox Multifunction printer.
It gets even more interesting when, in addition to executing direct commands like "print copies", the voice agent uses its net connectivity and AI to actually find and deliver output: "Alexa, can you print me the outstanding task list for my project?" or "Alexa, print me a list of all vendors providing XYZ services." And then, in the next incarnation, both the voice agent and the printer become part of a seamless AI-driven office workflow.
Voice assistants have moved from being just a fad to a legitimate office-efficiency tool, with the voice-recognition market estimated to be over a half-billion-dollar industry by 2019.
Alexa getting into the office is not a great stretch of the imagination: starting from the home, she had already made her way into our cars (You'll soon be able to start a Ford via Amazon Echo), so it was just a small hop, skip, and jump into our offices.
What does all this mean for you as a CIO or technology leader?
Be ready! Alexa, Siri et al may be in your workplace soon.
