Interview with Dejan S. Milojicic on top technology trends and predictions for 2019

Published January 23, 2019

Recently, I interviewed Dejan S. Milojicic, an established researcher, technologist, and former president of IEEE Computer Society, the world’s premier organization of computing professionals.

Milojicic is a distinguished technologist at Hewlett Packard Labs, Palo Alto, CA (since 1998). He was a technical director of the Open Cirrus Cloud Computing Testbed, with academic, industrial, and government sites in the US, Europe, and Asia. He has published two books and 180 papers, and holds 31 granted patents. He is an IEEE Fellow (2010) and ACM Distinguished Engineer (2008). Milojicic has served on 8 Ph.D. thesis committees and taught a cloud management course at SJSU. As president of the IEEE Computer Society (2014), he started Tech Trends, the Society's most-viewed news feature.


Dejan S. Milojicic, an established researcher, technologist and former president of IEEE Computer Society

Read the complete interview below:

Thank you for accepting our interview request. In fact, I have plenty of questions to ask, but since our portal, Big Data Made Simple, specializes in topics such as Big Data, Business Intelligence, Artificial Intelligence, Machine Learning, Cloud, Data Science, and related technologies, I prefer to limit my questions to those areas, at this point at least! Shall we start with the methodology used by the IEEE Computer Society to accurately predict and identify the top technologies with substantial potential to disrupt the market each year? How does it work?

We have a number of approaches that are more or less scientific and/or ad hoc. One approach we adopted was to search for some of the top themes inside the IEEE Computer Society Digital Library (CSDL) and the IEEE digital library (Xplore). There are prototype tools that enable us to see the trends. We also compare with searches in other relevant organizations' libraries (ACM, Google Scholar), as well as with Google search. A number of other organizations make similar predictions, although those usually take place later in the year and have different goals and time horizons.
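The trend-spotting step described above amounts to counting how often a theme appears in library search results, year over year. A minimal sketch of that idea follows; the records, titles, and the `term_trend` helper are all hypothetical, invented for illustration rather than taken from any IEEE tool:

```python
from collections import defaultdict

def term_trend(records, term):
    """Count how often `term` appears in paper titles, per year.

    `records` is an iterable of (year, title) pairs, e.g. exported
    from a digital-library search. Rising counts suggest a trend.
    """
    counts = defaultdict(int)
    for year, title in records:
        if term.lower() in title.lower():
            counts[year] += 1
    return dict(sorted(counts.items()))

# Hypothetical search results from a digital library.
records = [
    (2017, "Deep learning for edge devices"),
    (2018, "Edge computing architectures"),
    (2018, "Surveying edge analytics"),
    (2019, "Edge-to-cloud pipelines"),
]
print(term_trend(records, "edge"))
# {2017: 1, 2018: 2, 2019: 1}
```

Comparing such counts across several libraries (CSDL, Xplore, ACM, Google Scholar) is what lets the team cross-check whether a theme is trending broadly or only in one venue.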

However, the complementary backgrounds of the team offer a wealth of experience that is unmatched in these assessments: nothing can beat good judgment. This is especially true because we are talking about the likelihood that technologies will gain substantial adoption during the next year. This is not about popularity, so many subtle aspects need to be taken into account. We follow a simple model where each of the ten team members can recommend a small number of technologies (2-4) that he or she is willing to champion. Each proposal is accompanied by a brief description. Then everyone votes, with ten votes each (not all need to be used). Once all votes are in and the technologies are ranked, we look at the winners and do another round of merging, reprioritization, elimination, and so on. So the votes are really only a starting point for the discussion; again, dialogue and subtle aspects are taken into account. Once we agree on a selection that everyone is comfortable with, the original champions describe their selections in a bit more detail.
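The tally-and-rank step of the process above can be sketched in a few lines. This is a hypothetical illustration of the mechanics only; the ballots, technology names, and the `rank_proposals` helper are invented for the example, and the ranking is, as noted, just a starting point for discussion:

```python
from collections import Counter

def rank_proposals(ballots, votes_per_member=10):
    """Tally votes across members' ballots and rank technologies.

    Each ballot is a list of technology names one member voted for;
    a member may cast up to `votes_per_member` votes, and not all
    need to be used.
    """
    tally = Counter()
    for ballot in ballots:
        assert len(ballot) <= votes_per_member, "too many votes on one ballot"
        tally.update(ballot)
    # Highest vote counts first; ties broken alphabetically for determinism.
    return sorted(tally.items(), key=lambda kv: (-kv[1], kv[0]))

# Hypothetical ballots from three of the ten members.
ballots = [
    ["AI/ML accelerators", "Edge computing", "Quantum"],
    ["AI/ML accelerators", "Edge computing"],
    ["AI/ML accelerators", "Security"],
]
print(rank_proposals(ballots))
# [('AI/ML accelerators', 3), ('Edge computing', 2), ('Quantum', 1), ('Security', 1)]
```

The subsequent merging, reprioritization, and elimination rounds are human judgment calls and are not captured by the tally itself.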

An important aspect of the selection is the experience from past years. Some technologies met expectations, others didn't, and some just barely missed them. That last group are the likely candidates to be selected again the following year. At the end of the year, we grade ourselves, and those annual grades are taken into account in the next year's predictions. For example, our 2018 grades played an important role in the predictions for 2019.

IEEE Computer Society's 2019 forecast suggests that technology has reached a point where it can help resolve societal issues and improve our lives. Artificial intelligence, machine learning, and robotics have already proved very useful; yet there are serious concerns from technology leaders, pioneers, and renowned personalities who warn that they could be an existential threat to humanity. Currently, there is a gap between technological advancements and the mechanisms intended to regulate them, and the gap is growing wider. How do we reinvent the regulations, and what are the key principles for regulating emerging technologies?

There are at least three approaches that come to the rescue: ethics, regulations, and best practices all alleviate some of the problems in their own ways. The IEEE Standards Association has been working for a while on ethics in AI, more specifically on ethically aligned design. Reports are available from its website. The major lesson is that for AI to be ethical, ethics has to be built into the design of the AI. I strongly recommend reading the report; at least two versions have been released.

However, without regulations (laws), nothing will be possible. IEEE organized an event that touches on this theme holistically, called the Confluence of AI/ML and Cybersecurity, and a report was created as a result. The report, among other topics, addresses the regulatory compliance required for AI in support of cybersecurity and vice versa: how to make AI secure and safe for humans.

Finally, best practices are always the last and most important factor influencing what happens in the field. This is new and promising ground for research, development, and deployment, and I expect substantial evolution of the field as AI/ML is increasingly deployed in every facet of our lives.

All machine learning algorithms, such as neural networks, require powerful computing resources to perform efficiently. They may grow and become more complex over time as modifications are made, which presents a computational challenge, and the demand for better hardware resources is increasing significantly. Can you talk about the new innovations meant to bring down hardware complexity?

With the end of Moore's law, technology scaling has stopped, and the only way forward is either to come up with a new technology (neuromorphic, quantum, etc.), which is non-trivial, or to specialize using existing technologies.

IEEE has an initiative called Rebooting Computing that explores new approaches to computing. There is a traditional conference where researchers and technologists from academia, government, and industry publish results on innovation with new technologies. But there is also a roadmapping activity, the IRDS (International Roadmap for Devices and Systems), that attempts to predict how and when new technologies will evolve, from the underlying technological processes (physics and chemistry) through devices, systems architecture, and applications/use cases.

This is an extremely fertile research and development area, as AI/ML requires faster and cheaper (in both power and money) accelerators. However, these inventions are likely to help even beyond today's algorithms and applications, and will hopefully help us solve even harder problems, such as NP-hard problems that are critical in other areas of our lives, including supply chain management, scheduling, DNA sequencing, and many others.

It has been over six months since the General Data Protection Regulation (GDPR) came into effect across the EU, forcing organizations to get their data protection practices in order. There has been a huge impact on businesses everywhere. For example, in the banking sector, all global banks are now required to store their customer data locally. With the new regulations on how data is collected, stored, and moved, global companies are now readjusting the strategies for their AI/ML platforms from both a software and an infrastructure point of view. What is your take on this?

GDPR was a wake-up call for the industry, to the benefit of individuals and the protection of their privacy rights. I have witnessed a tremendous rush to address it across the industry and many other organizations. And it appears that this is only the beginning, as other regions, such as California and perhaps others, will follow the European lead. I think this is good for us as individuals, but it is also good for providers, pushing them to put things in order for their customers.

However, the technical challenges will continue to increase with the introduction of IoT and a sea of sensors. Who will own this data, and how will we protect it, including the data coming out of our own bodies? Luckily, that data is localized and primarily accessible within the immediate vicinity, but as insights start being derived and flowing upstream into the Cloud, regulations will have to be introduced to protect the privacy of this data as well.

Finally, almost everything in the digital world is connected to the cloud in one way or another, and there is a popular belief that the cloud is already a business-as-usual activity. What is going to be the future of cloud computing?

The Cloud continues to evolve behind the scenes, even if we as users do not notice it. For example, the public Cloud delivery model has recently received more competition from the traditional delivery models of hosted and managed Clouds. Amazon is starting to put up instances of its Cloud at customer sites, Azure has been advocating this approach for some time, and many traditional enterprises have been supporting private data centers for a while, including hybrid Clouds that span the two (public and private).

What many people do not realize is the amount of non-computing and non-data effort required to run these big data-center instances, such as power, cooling, and reliability support. With data centers growing in size and the need to distribute load for scaling and reliability reasons (disaster recovery), the challenges are ever increasing. AI/ML can further help in this regard, in better understanding how failures happen, the reliability of components, traffic patterns, user behavior, and many other factors.

However, I also find interesting the dichotomy of computation and data between the edge and the Cloud. A lot more data nowadays flows from IoT through edge computers up to the Cloud than is generated by humans. Some of it is reduced by doing analytics at the edge, but a large amount remains, and it is only increasing. For example, industrial IoT has been gaining a lot of traction lately. With the introduction of assisted and/or autonomous vehicles, the amount of data flowing between vehicles and roads, municipalities, factories, etc. will also grow. These are only two of the many applications that will continue to drive the demand for Cloud computing.