WSO2 Workshop on "EMERGING TECHNOLOGY TRENDS"
EMERGING TECHNOLOGY TRENDS
In the age of disruption, businesses and
their leaders will rise or fall based on their ability to spot and
creatively respond to rapid technological change. Some companies notice
an emerging technology and take a “wait and see” attitude. Others see a
new technology and take action. They begin experimenting, making small
bets, and learning.
Their attitude is that it’s never too early to start. It’s never too
early to begin looking at what others are already doing. It’s never too
early to engage the imagination to conceive of how the new technology
could be used to create competitive advantage.
These "fast movers" often jump start creative applications by asking themselves leading questions such as:
Where is this technology likely to be in five years?
When will it become mainstream?
How might it help us differentiate and add value to customers?
How could it improve speed of satisfaction, manage choice and complexity, and enhance the customer experience?
How will/could this new technology help us gain productivity and become a better place to work?
With such questions in mind, what follows is a look at technologies that
are ripe for exploitation.
CURRENT TRENDS
API
In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building software. In general terms, it is a set of clearly defined methods of communication between various components. A good API makes it easier to develop a computer program by providing all the building blocks, which are then put together by the programmer.
An API may be for a web-based system, operating system, database system, computer hardware, or software library.
An API specification can take many forms, but often includes specifications for routines, data structures, object classes, variables, or remote calls. POSIX, Windows API and ASPI are examples of different forms of APIs. Documentation for the API is usually provided to facilitate usage and implementation.
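As a rough illustration (not tied to any particular product or specification), the sketch below defines a tiny Python library whose public methods form its API: callers rely only on the documented put/get/delete contract, never on the internals. All names here are invented for the example.

# Illustrative only: a tiny key-value store exposing a clearly defined API.
# The class and method names are hypothetical, not from any real library.

class KeyValueStore:
    """API: put(key, value), get(key, default), delete(key) -> bool."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        """Store value under key, replacing any previous value."""
        self._data[key] = value

    def get(self, key, default=None):
        """Return the value for key, or default if key is absent."""
        return self._data.get(key, default)

    def delete(self, key):
        """Remove key if present; return True if something was removed."""
        return self._data.pop(key, None) is not None


# A caller builds on the documented methods without touching internals.
store = KeyValueStore()
store.put("greeting", "hello")
print(store.get("greeting"))    # hello
print(store.delete("missing"))  # False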
AI
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".
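The "intelligent agent" definition above can be pictured as a simple perceive-then-act loop. The Python sketch below uses an invented thermostat scenario purely to illustrate choosing the action expected to move the agent toward its goal.

# Illustrative sketch of the "intelligent agent" idea: perceive the
# environment, then pick the action expected to best achieve a goal.
# The thermostat scenario and all numbers are invented for demonstration.

def perceive(environment):
    return environment["temperature"]

def choose_action(temperature, target=21.0):
    # A trivial "policy": act so as to move the temperature toward the goal.
    if temperature < target - 1:
        return "heat"
    if temperature > target + 1:
        return "cool"
    return "idle"

environment = {"temperature": 17.5}
for step in range(3):
    reading = perceive(environment)
    action = choose_action(reading)
    print(f"step {step}: temperature={reading:.1f} -> {action}")
    # Crude model of how each action changes the environment.
    environment["temperature"] += {"heat": 1.5, "cool": -1.5, "idle": 0.0}[action]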
The scope of AI is disputed: as machines become increasingly
capable, tasks considered to require "intelligence" are often removed
from the definition, a phenomenon known as the AI effect, leading to the quip, "AI is whatever hasn't been done yet."[3] For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology. Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, and military simulations.
Artificial intelligence was founded as an academic discipline in
1956, and in the years since has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success and renewed funding. For most of its history, AI research has been divided into subfields that often fail to communicate with each other. These sub-fields are based on technical considerations, such as particular goals (e.g. "robotics" or "machine learning"), the use of particular tools ("logic" or artificial neural networks), or deep philosophical differences. Subfields have also been based on social factors (particular institutions or the work of particular researchers).
The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects. General intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.
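As a toy example of the "mathematical optimization" tools mentioned above, the following Python snippet runs plain gradient descent on a one-dimensional quadratic; the function, starting point and step size are arbitrary choices for illustration.

# Gradient descent minimizing f(x) = (x - 3)^2; the minimum is at x = 3.

def f(x):
    return (x - 3) ** 2

def grad_f(x):
    return 2 * (x - 3)

x = 0.0
learning_rate = 0.1
for step in range(50):
    x -= learning_rate * grad_f(x)  # move against the gradient

print(f"minimum found near x = {x:.4f}, f(x) = {f(x):.6f}")  # x approaches 3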
The field was founded on the claim that human intelligence "can be so precisely described that a machine can be made to simulate it". This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger to humanity if it progresses unabated. Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.
In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.
IOT
The Internet of Things (IoT) is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and connectivity, which enables these things to connect and exchange data. This creates opportunities for more direct integration of the physical world into computer-based systems, resulting in efficiency improvements, economic benefits, and reduced human exertion.
The number of IoT devices increased 31% year-over-year to 8.4 billion in 2017, and it is estimated that there will be 30 billion devices by 2020. The global market value of IoT is projected to reach $7.1 trillion by 2020.
IoT involves extending internet connectivity beyond standard devices, such as desktops, laptops, smartphones and tablets, to a range of traditionally "dumb" or non-internet-enabled physical devices and everyday objects. Embedded with technology, these devices can communicate and interact over the internet, and they can be remotely monitored and controlled.
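As a loose sketch of that remote-reporting pattern, the Python snippet below simulates a sensor node that encodes a reading as JSON and reports it to a backend over HTTP. The endpoint URL, device ID, and sensor values are all hypothetical; real IoT deployments often use purpose-built protocols such as MQTT or CoAP instead.

# A minimal sketch (not a production design) of an IoT node reporting
# readings to a backend: read -> encode -> send.

import json
import random
import urllib.request

BACKEND_URL = "http://example.com/api/readings"  # hypothetical endpoint

def read_sensor():
    # Stand-in for a real driver (e.g. a temperature sensor on a microcontroller).
    return {"device_id": "sensor-42", "temperature_c": round(random.uniform(18, 26), 2)}

def send_reading(reading):
    payload = json.dumps(reading).encode("utf-8")
    request = urllib.request.Request(
        BACKEND_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as response:
            return response.status
    except OSError as error:  # the hypothetical backend will not accept this
        print(f"would have sent {reading}: {error}")
        return None

send_reading(read_sensor())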
BOTS
An Internet bot, also known as a web robot, WWW robot or simply a bot, is a software application that runs automated tasks (scripts) over the Internet. Typically, bots perform tasks that are both simple and structurally
repetitive, at a much higher rate than would be possible for a human
alone. The largest use of bots is in web spidering (web crawler),
in which an automated script fetches, analyzes and files information
from web servers at many times the speed of a human. More than half of
all web traffic is made up of bots.
Efforts by servers hosting websites to counteract bots vary.
Servers may choose to outline rules on the behaviour of internet bots by
implementing a robots.txt
file: this file is simply text stating the rules governing a bot's
behaviour on that server. Any bot interacting with (or 'spidering') any
server that does not follow these rules should, in theory, be denied
access to, or removed from, the affected website. If a server's only rule
enforcement is a posted text file with no associated program or software,
then adhering to those rules is entirely voluntary; in reality there is no
way to enforce those rules, or even to ensure that a bot's creator or
operator acknowledges, or even reads, the robots.txt file's contents.
Some bots are "good" (for example, search engine spiders) while others can
be used to launch malicious attacks, most notably in political campaigns.
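As a sketch of what a well-behaved bot might do, the Python snippet below consults a site's robots.txt before fetching a page, using only the standard library. The user agent name and target URLs are placeholders; real crawlers also throttle their requests and identify themselves via the User-Agent header.

# A minimal "polite" bot: check robots.txt, then fetch one page.

import urllib.robotparser
import urllib.request

USER_AGENT = "example-workshop-bot"   # illustrative name
PAGE_URL = "https://example.com/"     # illustrative target

robots = urllib.robotparser.RobotFileParser()
robots.set_url("https://example.com/robots.txt")
try:
    robots.read()  # download and parse the rules (obeying them is voluntary)
except OSError:
    print("could not fetch robots.txt; a cautious bot would stop here")
else:
    if robots.can_fetch(USER_AGENT, PAGE_URL):
        with urllib.request.urlopen(PAGE_URL, timeout=10) as response:
            print(f"fetched {PAGE_URL}: {len(response.read())} bytes")
    else:
        print(f"robots.txt disallows fetching {PAGE_URL}")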
EMERGING TRENDS
BLOCKCHAIN
A blockchain, originally block chain, is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data (generally represented as a merkle tree root hash). By design, a blockchain is resistant to modification of the data. It is "an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way". For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol
for inter-node communication and validating new blocks. Once recorded,
the data in any given block cannot be altered retroactively without
alteration of all subsequent blocks, which requires consensus of the
network majority.
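A toy Python sketch can make that hash-linking concrete: each block records the hash of the previous block, so tampering with old data invalidates every later link. This only illustrates the data structure, not mining, consensus or any real cryptocurrency, and the "transactions" are made up.

# Each block stores the hash of its predecessor; changing history breaks the chain.

import hashlib
import json
import time

def block_hash(block):
    # Hash a canonical JSON encoding of the block's contents.
    encoded = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).hexdigest()

def new_block(data, previous_hash):
    return {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}

# Build a tiny chain: a genesis block, then two blocks of "transactions".
chain = [new_block("genesis", "0" * 64)]
for data in ["alice pays bob 5", "bob pays carol 2"]:
    chain.append(new_block(data, block_hash(chain[-1])))

def chain_is_valid(chain):
    return all(
        chain[i]["previous_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(chain_is_valid(chain))             # True
chain[1]["data"] = "alice pays bob 500"  # tamper with history
print(chain_is_valid(chain))             # False: later links no longer match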
Blockchains are secure by design and exemplify a distributed computing system with high Byzantine fault tolerance. Decentralized consensus can therefore be achieved with a blockchain.
Blockchain was invented by Satoshi Nakamoto in 2008 to serve as the public transaction ledger of the cryptocurrency bitcoin. The invention of the blockchain for bitcoin made it the first digital currency to solve the double-spending problem without the need of a trusted authority or central server. The bitcoin design has inspired other applications.
SHARING ECONOMY
Sharing economy is an umbrella term with a range of meanings, often used to describe economic activity involving online transactions. Originally growing out of the open-source community to refer to peer-to-peer based sharing of access to goods and services, the term is now sometimes used in a broader sense to describe any sales transactions that are done via online market places, even ones that are business to business (B2B), rather than peer-to-peer. For this reason, the term sharing economy
has been criticised as misleading, with some arguing that even services that
enable peer-to-peer exchange can be primarily profit-driven. However, many
commentators assert that the term is still valid as a means of describing a
generally more democratized marketplace, even when it is applied to a broader
spectrum of services. Alternatively, collaborative consumption or the sharing
economy can refer to resource circulation systems that give consumers a
two-sided role, in which they may act both as providers and as obtainers of
resources. This view allows for a broader understanding of the sharing economy,
based on the consumer's capacity to switch between those roles.
The sharing economy is also known as collaborative consumption, the
collaborative economy or the peer economy. It refers to a hybrid market
model of peer-to-peer exchange. Such transactions are often facilitated via community-based online services. Uberization is another name for the phenomenon.
The sharing economy may take a variety of forms, including using information technology
to provide individuals with information that enables the optimization
of resources through the mutualization of excess capacity in goods and
services. A common premise is that when information about goods is shared (typically via an online marketplace), the value of those goods may increase for the business, for individuals, for the community and for society in general.
Collaborative consumption as a phenomenon is a class of economic
arrangements in which participants mutualize access to products or
services, rather than having individual ownership. The phenomenon stems from an increasing consumer desire to be in
control of their consumption instead of "passive 'victims' of
hyperconsumption".
The consumer peer-to-peer rental market is valued at $26bn (£15bn), with new services and platforms emerging frequently.
The collaborative consumption model is used in online marketplaces such as eBay as well as emerging sectors such as social lending, peer-to-peer accommodation, peer-to-peer travel experiences, peer-to-peer task assignments or travel advising, carsharing or commute-bus sharing.
The Harvard Business Review, the Financial Times
and many others have argued that "sharing economy" is a misnomer.
Harvard Business Review suggested that a more accurate term for the sharing
economy, in the broad sense of the term, is "access economy".
The authors say, "When 'sharing' is market-mediated—when a company is
an intermediary between consumers who don't know each other—it is no
longer sharing at all. Rather, consumers are paying to access someone
else's goods or services."
AR/VR
One of the biggest confusions in the world of augmented reality
is the difference between augmented reality and virtual reality. Both
are earning a lot of media attention and are promising tremendous
growth. So what is the difference between virtual reality and augmented reality?
What is Virtual Reality?
Virtual reality
(VR) is an artificial, computer-generated simulation or recreation of a
real life environment or situation. It immerses the user by making them
feel like they are experiencing the simulated reality firsthand,
primarily by stimulating their vision and hearing.
VR is typically achieved by wearing a headset like Facebook’s Oculus equipped with the technology, and is used prominently in two different ways:
To create and enhance an imaginary reality for gaming,
entertainment, and play (such as video and computer games, 3D movies,
or head-mounted displays).
To enhance training for real-life environments by creating a
simulation of reality where people can practice beforehand (such as
flight simulators for pilots).
One early way of describing virtual worlds is VRML (Virtual Reality Modeling
Language), a file format that can be used to define 3D scenes and specify
what types of interactions are possible within them.
What is Augmented Reality?
Augmented reality
(AR) is a technology that layers computer-generated enhancements atop
an existing reality in order to make it more meaningful through the
ability to interact with it. AR is developed into apps and used on
mobile devices to blend digital components into the real world in such a
way that they enhance one another, but can also be told apart easily.
AR technology is quickly coming into the mainstream. It is used to
display score overlays on telecasted sports games and pop out 3D emails,
photos or text messages on mobile devices. Leaders of the tech industry
are also using AR to do amazing and revolutionary things with holograms
and motion activated commands.
QUANTUM COMPUTING
Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement. A quantum computer is a device that performs quantum computing. Such computers are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits (qubits), which can be in superpositions of states. A quantum Turing machine
is a theoretical model of such a computer, and is also known as the
universal quantum computer. The field of quantum computing was
initiated by the work of Paul Benioff and Yuri Manin in 1980, Richard Feynman in 1982, and David Deutsch in 1985.
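The idea of a qubit in superposition can be illustrated numerically with a tiny state-vector simulation in Python using NumPy. This only simulates the underlying linear algebra on a classical machine; it is not a quantum computer.

# A single qubit as a 2-element state vector: a Hadamard gate turns |0>
# into an equal superposition of |0> and |1>, so each measurement outcome
# occurs with probability 1/2.

import numpy as np

ket_zero = np.array([1.0, 0.0])                  # the state |0>
hadamard = np.array([[1.0, 1.0],
                     [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate

state = hadamard @ ket_zero                      # |0> -> (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2               # Born rule

print("amplitudes:", state)          # both approximately 0.7071
print("P(0), P(1):", probabilities)  # [0.5, 0.5]

# Sampling a few "measurements" collapses the superposition to 0 or 1.
samples = np.random.choice([0, 1], size=10, p=probabilities)
print("measurements:", samples)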
As of 2018, the development of actual quantum computers is still
in its infancy, but experiments have been carried out in which quantum
computational operations were executed on a very small number of quantum
bits.
Both practical and theoretical research continues, and many national
governments and military agencies are funding quantum computing research
in an effort to develop quantum computers for civilian, business, trade, environmental and national security purposes, such as cryptanalysis. A small 20-qubit quantum computer exists and is available for experiments via the IBM Quantum Experience project; D-Wave Systems also sells machines based on quantum annealing.
Large-scale quantum computers would theoretically be able to
solve certain problems much more quickly than any classical computers
that use even the best currently known algorithms, like integer factorization using Shor's algorithm (which is a quantum algorithm) and the simulation of quantum many-body systems. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle (with exponential resources) simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis. On the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers.
3D PRINTING
3D printing is any of various processes in which material is joined or solidified under computer control to create a three-dimensional object,
with material being added together (such as liquid molecules or powder
grains being fused together). 3D printing is used in both rapid prototyping and additive manufacturing (AM). Objects can be of almost any shape or geometry and typically are produced using digital model data from a 3D model or another electronic data source such as an Additive Manufacturing File (AMF), usually in sequential layers. There are many different technologies, such as stereolithography (SLA) or fused deposition modeling
(FDM). Thus, unlike material removed from stock in the conventional
machining process, 3D printing or AM builds a three-dimensional object
from a computer-aided design (CAD) model or AMF file, usually by
successively adding material layer by layer.
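As a highly simplified sketch of that layer-by-layer idea, the Python snippet below "slices" a cone, standing in for a CAD model, into horizontal layers the way a slicer would before printing; the dimensions and layer height are arbitrary.

# Slice a simple shape (a cone described by a formula) into layers.

CONE_HEIGHT_MM = 20.0
BASE_RADIUS_MM = 10.0
LAYER_HEIGHT_MM = 5.0

def radius_at(z):
    # Cross-section radius of the cone at height z.
    return BASE_RADIUS_MM * (1 - z / CONE_HEIGHT_MM)

z = 0.0
layer = 0
while z < CONE_HEIGHT_MM:
    print(f"layer {layer}: z = {z:.1f} mm, deposit a disc of radius {radius_at(z):.2f} mm")
    z += LAYER_HEIGHT_MM
    layer += 1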
The term "3D printing" originally referred to a process that deposits a binder material onto a powder bed with inkjet printer
heads layer by layer. More recently, the term is being used in popular
vernacular to encompass a wider variety of additive manufacturing
techniques. United States and global technical standards use the official term additive manufacturing for this broader sense.