
AI, Blockchain, IoT Convergence Insights from Patents

Contents

I. AI, Blockchain, IoT Technology Innovation Status
1. Technology Innovation Status in Innovation Entity
2. Technology Innovation Evolution
3. Technology Innovation Status in Innovation Country
4. Technology Innovation Status in CPC Classification
5. Technology Innovation Status in Specific Technology
A. Deep RL Technology Innovation Status
B. Deep Learning for Autonomous Vehicle Technology Innovation Status
C. Deep Learning for 5G Technology Innovation Status
D. Deep Learning for Cybersecurity Technology Innovation Status
E. Blockchain Privacy Technology Innovation Status
F. Blockchain Interoperability Technology Innovation Status
G. Blockchain DID Technology Innovation Status
H. Blockchain Tokenization Technology Innovation Status

II. AI Blockchain IoT Convergence Technology Innovation Status
1. Technology Innovation Status in Innovation Entity
2. Convergence Technology Innovation Status in Convergence Field
3. Convergence Technology Innovation Evolution
4. Technology Innovation Status in Innovation Country
5. Technology Innovation Status in CPC Classification
6. Technology Innovation Status in Specific Application
Privacy-preserving Blockchain-based AI Data/Model Marketplace
Blockchain-based Decentralized Machine Learning Platform
Blockchain-based Secure Telehealth-diagnostic System
Peer-to-Peer Micro-Loan Transaction System
Predictive Maintenance Platform for Industrial Machine using Industrial IoT
Blockchain Augmented IoT System for Dynamic Supply Chain Tracking
Decentralized Energy Management Utilizing Blockchain Technology
Provisioning Edge Devices in Mobile Carrier Network as Compute Nodes in Blockchain Network
Distributed Handoff-related Processing for Wireless Networks

III. Appendix
1. AI, Blockchain, IoT Convergence at a Glance
2. AI, Blockchain, IoT for Finance at a Glance
3. AI, Blockchain, IoT for Healthcare at a Glance
4. 5G Based AI + Blockchain + IoT Convergence at a Glance


©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/

AI, Blockchain, IoT Convergence Insights from Patents [1]
Alex G. Lee [2]

Executive Summary [3]

Patents are a good information resource for obtaining state-of-the-art technology innovation insights. Patents that specifically describe the major technology field of a specific technology innovation are a good indicator of the technology innovation status of a specific innovation entity. To find the AI, Blockchain, and IoT technology innovation status, patent applications in the USPTO, EPO, and IPO with priority dates between January 1, 2010 and September 30, 2020 that specifically describe the major AI, Blockchain, and IoT technologies are searched and reviewed. 48,000, 17,000, and 32,000 published patent applications related to the key AI, Blockchain, and IoT technology innovations, respectively, are selected for detailed analysis. The top 100 AI, Blockchain, and IoT technology innovation entities are selected based on their number of published patent applications.

The top 100 AI technology innovation entities represent 23,593 patent applications. The top 10 AI innovation leaders are IBM, Google, Microsoft, Samsung Electronics, Intel, Siemens, Facebook, Philips, GE, and Accenture. The top 100 Blockchain technology innovation entities represent 7,809 patent applications. The top 10 Blockchain innovation leaders are Alibaba Group, IBM, nChain, Mastercard, Walmart, Bank of America, Visa, Microsoft, Intel, and Accenture. The top 100 IoT technology innovation entities represent 18,786 patent applications. The top 10 IoT innovation leaders are Qualcomm, Ericsson, LG Electronics, Samsung Electronics, Intel, Ford, IBM, Huawei, GM, and Toyota.

Counting the growth in patenting over a period of time can be a good measuring tool for monitoring the evolution of technology innovation. AI, Blockchain, and IoT patenting activities with respect to the technology innovation date (priority year) are analyzed.

[1] This research was supported in part by the WEF Global Partnership Program for the Fourth Industrial Revolution through KAIST KPC4IR, funded by the Ministry of Science and ICT of S. Korea.
[2] Alex G. Lee, Ph.D./Patent Attorney, is a principal consultant at TechIPm, LLC.
[3] Final Report, 11/26/2020
The AI patent application activity chart indicates that the AI technology innovation activity entered a rapid growth stage in 2016. Since there is usually a time lag of around two years between the initial application date (priority year) and the publication date, the AI patent application activity chart indicates that the AI technology innovation activity is still in a growth stage. The Blockchain patent application activity chart indicates that the Blockchain technology innovation activity entered a rapid growth stage in 2016 and is still in a growth stage. The IoT patent application activity chart indicates that the IoT technology innovation activity had already entered a rapid growth stage before 2010 and matured in 2018.

Patent counting can also be a good measuring tool for the technology innovation status of a specific country. Landscapes of the AI, Blockchain, and IoT technology innovation entities with respect to the country of their headquarters location are presented. The top 10 AI technology innovation countries are USA, Japan, S. Korea, China, Germany, UK, Canada, Ireland, Netherlands, and India. The top 10 Blockchain technology innovation countries are USA, China, Japan, UK, Germany, Canada, Antigua and Barbuda, S. Korea, Switzerland, and Ireland. The top 10 IoT technology innovation countries are USA, S. Korea, Japan, Sweden, China, Germany, France, UK, Israel, and Canada.

Patents are a good information resource for obtaining state-of-the-art technology innovation insights in a specific technology field. Technology innovation status in deep reinforcement learning, deep learning for autonomous vehicles, deep learning applications for 5G, deep learning applications for cybersecurity, blockchain privacy, blockchain interoperability, blockchain decentralized identifiers, and blockchain tokenization is presented.

To find the AI, Blockchain, IoT convergence technology innovation status, patent applications in the USPTO, EPO, and IPO with priority dates between January 1, 2010 and September 30, 2020 that specifically describe the major AI, Blockchain, IoT convergence technologies are searched and reviewed. 1,288 published patent applications related to the key AI, Blockchain, IoT convergence technology innovation are selected for detailed analysis. The top 100 AI,
Blockchain, IoT convergence technology innovation entities are selected based on their number of published patent applications. The key AI, Blockchain, IoT convergence innovation entities are IBM, Strong Force IP, Intel, Accenture, Microsoft, Bank of America, Bao Tran, Capital One Services, LG Electronics, Cisco, Ericsson, Samsung Electronics, HP, nChain, Nokia, Inmentis LLC, Salesforce, Tata Consultancy Services, Siemens, and T-Mobile. AI+IoT convergence is the most innovated AI, Blockchain, IoT convergence technology, followed by AI+Blockchain, Blockchain+IoT, and AI+Blockchain+IoT. The patent application activity chart indicates that the AI, Blockchain, IoT convergence technology innovation activity is in a rapid growth stage. The top 10 AI, Blockchain, IoT convergence technology innovation countries are USA, India, S. Korea, Germany, Canada, UK, Ireland, Japan, Singapore, and Israel. The major AI, Blockchain, IoT convergence patent applications with respect to the field of application are Digital Infrastructure, Cybersecurity/Fraud Detection, Consumer Electronics, Banking/Financial Transaction, Entertainment/Game, Automotive/Transportation, Industrial/Manufacturing, Supply Chain Management, Environment, Government/Public Service, Healthcare, Retail, Agriculture, Augmented Reality, Business Intelligence, International Trade, Office Equipment, Real Estate Marketplace, and Telecommunications.

Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities. Patents regarding various AI, Blockchain, IoT convergence technology implementations/products/services are presented: Privacy-preserving Blockchain-based AI Data/Model Marketplace; Blockchain-based Decentralized Machine Learning Platform; Blockchain-based Secure Telehealth-diagnostic System; Peer-to-Peer Micro-Loan Transaction System; Predictive Maintenance Platform for Industrial Machines using Industrial IoT; Autonomous Vehicle V2V Communication Management to Facilitate Operational Safety; Blockchain Augmented IoT System for Dynamic Supply Chain Tracking; Decentralized Energy Management Utilizing Blockchain Technology; Provisioning Edge Devices in a Mobile Carrier Network as Compute Nodes in a Blockchain Network; and Distributed Handoff-related Processing for Wireless Networks.
I. AI, Blockchain, IoT Technology Innovation Status

1. Technology Innovation Status in Innovation Entity

Patents are a good information resource for obtaining state-of-the-art technology innovation insights. Patents that specifically describe the major technology field of a specific technology innovation are a good indicator of the technology innovation status of a specific innovation entity. To find the AI, Blockchain, and IoT technology innovation status, patent applications in the USPTO, EPO, and IPO with priority dates between January 1, 2010 and September 30, 2020 that specifically describe the major AI, Blockchain, and IoT technologies are searched and reviewed. 48,000, 17,000, and 32,000 published patent applications related to the key AI, Blockchain, and IoT technology innovations, respectively, are selected for detailed analysis. The top 100 AI, Blockchain, and IoT technology innovation entities are selected based on their number of published patent applications.

A. AI

The following figure shows the landscape of the top 100 AI technology innovation entities with respect to the number of published patent applications. The top 100 AI technology innovation entities represent 23,593 patent applications. The top 10 AI innovation leaders (IBM, Google, Microsoft, Samsung Electronics, Intel, Siemens, Facebook, Philips, GE, Accenture) account for 11,217 patent applications. The size of the bubble for IBM represents 2,542 patent applications.
B. Blockchain

The following figure shows the landscape of the top 100 Blockchain technology innovation entities with respect to the number of published patent applications. The top 100 Blockchain technology innovation entities represent 7,809 patent applications. The top 10 Blockchain innovation leaders (Alibaba Group, IBM, nChain, Mastercard, Walmart, Bank of America, Visa, Microsoft, Intel, Accenture) account for 3,791 patent applications. The size of the bubble for Alibaba Group represents 1,182 patent applications.
C. IoT

The following figure shows the landscape of the top 100 IoT technology innovation entities with respect to the number of published patent applications. The top 100 IoT technology innovation entities represent 18,786 patent applications. The top 10 IoT innovation leaders (Qualcomm, Ericsson, LG Electronics, Samsung Electronics, Intel, Ford, IBM, Huawei, GM, Toyota) account for 9,326 patent applications. The size of the bubble for Qualcomm represents 1,851 patent applications.
2. Technology Innovation Evolution

Counting the growth in patenting over a period of time can be a good measuring tool for monitoring the evolution of technology innovation. AI, Blockchain, and IoT patenting activities with respect to the technology innovation date (priority year) are analyzed.

A. AI

The following patent application activity chart shows the AI technology innovation growth trends. The patent application activity chart indicates that the AI technology innovation activity started
in a rapid growth stage from 2016. Since there is usually a time lag of around two years between the initial application date (priority year) and the publication date, the patent application activity chart indicates that the AI technology innovation activity is still in a growth stage.

B. Blockchain

The following patent application activity chart shows the Blockchain technology innovation growth trends. The patent application activity chart indicates that the Blockchain technology innovation activity started in a rapid growth stage from 2016. Since there is usually a time lag of around two years between the initial application date (priority year) and the publication date, the patent application activity chart indicates that the Blockchain technology innovation activity is still in a growth stage.

C. IoT
The following patent application activity chart shows the IoT technology innovation growth trends. The patent application activity chart indicates that the IoT technology innovation activity had already started a rapid growth stage before 2010. Since there is usually a time lag of around two years between the initial application date (priority year) and the publication date, the patent application activity chart indicates that the IoT technology innovation activity matured in 2018.

3. Technology Innovation Status in Innovation Country

Patent counting can also be a good measuring tool for the technology innovation status of a specific country. The following figures show the landscapes of the AI, Blockchain, and IoT technology innovation entities with respect to the country of their headquarters (H.Q.) location.

A. AI

As can be seen in the following figure, the top 10 AI technology innovation countries are USA, Japan, S. Korea, China, Germany, UK, Canada, Ireland, Netherlands, and India.
B. Blockchain

As can be seen in the following figure, the top 10 Blockchain technology innovation countries are USA, China, Japan, UK, Germany, Canada, Antigua and Barbuda, S. Korea, Switzerland, and Ireland.

C. IoT

As can be seen in the following figure, the top 10 IoT technology innovation countries are USA, S. Korea, Japan, Sweden, China, Germany, France, UK, Israel, and Canada.
4. Technology Innovation Status in CPC Classification

The Cooperative Patent Classification (CPC) is a patent classification system jointly developed by the EPO and the USPTO. Each CPC classification [4] term consists of a symbol such as "A01B33/00". The first letter is the "section symbol", a letter from A: Human Necessities, B: Operations and Transport, C: Chemistry and Metallurgy, D: Textiles, E: Fixed Constructions, F: Mechanical Engineering, G: Physics, H: Electricity, or Y: Emerging Cross-Sectional Technologies. This is followed by a two-digit number giving the "class symbol" ("A01" represents "Agriculture; forestry; animal husbandry; trapping; fishing"). The final letter makes up the "subclass" ("A01B" represents "Soil working in agriculture or forestry; parts, details, or accessories of agricultural machines or implements, in general"). The subclass is then followed by a 1- to 3-digit "group" number, an oblique stroke, and a number of at least two digits representing a "main group" ("00") or "subgroup". The following figures show the landscapes of the AI, Blockchain, and IoT technology innovation status with respect to CPC classification.

[4] https://www.uspto.gov/web/patents/classification/
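To make the symbol structure concrete, the following is a small illustrative Python sketch that decomposes a CPC symbol into the parts described above; the helper name, regular expression, and section titles simply restate the description here and are not an official CPC tool.

```python
import re

# Decompose a CPC symbol such as "A01B33/00" into its parts, following the
# section/class/subclass/group structure described above. Zero-padded forms
# used in the landscape figures (e.g. "G06N-0020/00") would first need the
# hyphen and leading zeros removed.
CPC_PATTERN = re.compile(r"^([A-HY])(\d{2})([A-Z])(\d{1,3})/(\d{2,})$")

SECTIONS = {
    "A": "Human Necessities", "B": "Operations and Transport",
    "C": "Chemistry and Metallurgy", "D": "Textiles",
    "E": "Fixed Constructions", "F": "Mechanical Engineering",
    "G": "Physics", "H": "Electricity",
    "Y": "Emerging Cross-Sectional Technologies",
}

def parse_cpc(symbol: str) -> dict:
    """Split a CPC symbol into section, class, subclass, group, and main group/subgroup."""
    m = CPC_PATTERN.match(symbol.replace(" ", ""))
    if not m:
        raise ValueError(f"Not a recognizable CPC symbol: {symbol!r}")
    section, cls, subclass, group, subgroup = m.groups()
    return {
        "section": section,
        "section_title": SECTIONS[section],
        "class": section + cls,                # e.g. "A01"
        "subclass": section + cls + subclass,  # e.g. "A01B"
        "group": group,
        "main_group_or_subgroup": subgroup,    # "00" denotes the main group
    }

print(parse_cpc("A01B33/00"))
print(parse_cpc("G06N20/00"))  # Machine learning
```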
A. AI

As can be seen in the following figure, the top 20 AI technology innovation CPC classifications are:
G06N-0020/00 (Machine learning)
G06N-0003/08 (AI for automation)
G06N-0003/0454 (AI using combination of multiple neural networks)
G06T-0007/0012 (Image analysis using segmentation)
G06N-0003/063 (Computer systems based on biological models using electronic means)
G16H-0050/20 (AI for medical diagnosis)
G06N-0003/084 (Back-propagation for deep learning)
G06N-0003/04 (Artificial life, i.e. computers simulating life)
G06N-0005/04 (Inference methods using knowledge-based models)
G06F-0040/30 (NLP for semantic analysis)
G06N-0003/0445 (Feedback networks for AI)
G06K-0009/6256 (Recognizing patterns by training AI system)
H04L-0063/1425 (AI for network anomaly detection)
G10L-0015/22 (AI method for a speech recognition process)
G06F-0016/9535 (AI for search customization based on user profiles and personalization)
H04L-0063/1416 (AI for network attack signature detection)
G06N-0007/005 (Probabilistic networks based on specific mathematical models)
G06K-0009/6262 (Active pattern learning technique)
G06K-0009/66 (Pattern recognition using adaptive learning)
G16H-0010/60 (AI for processing electronic patient records)
B. Blockchain

As can be seen in the following figure, the top 20 Blockchain technology innovation CPC classifications are:
H04L-0009/3239 (Cryptography involving non-keyed hash functions)
G06Q-0040/04 (Data processing for cryptocurrency exchange)
G06Q-0020/065 (Cryptocurrency payment architectures, schemes or protocols)
H04L-0009/0637 (Cryptography for blockchain)
G06F-0021/64 (Protecting data integrity, e.g. using checksums, certificates or signatures)
H04L-0009/3247 (Cryptography involving digital signatures)
G06Q-0020/3829 (Payment protocols involving key management)
G06Q-0020/401 (Payment transaction verification)
H04L-0009/3236 (Cryptography using hash functions)
G06Q-0020/02 (Payment protocols involving certification authority)
G06F-0016/2379 (Database updates)
G06Q-0020/389 (Keeping log of payment transactions for guaranteeing non-repudiation of a transaction)
G06F-0021/602 (Providing cryptographic facilities or services for data protection)
G06F-0021/6245 (Protecting personal data, e.g. for financial or medical purposes)
G06F-0016/2365 (Process for ensuring data consistency and integrity)
G06F-0021/6218 (Distributed database system)
G06Q-0020/10 (Payment transaction for electronic funds transfer)
G06Q-0020/0658 (Cryptocurrency payment management)
G06Q-0040/025 (Financial credit processing or loan processing)
G06F-0021/10 (Security arrangements for protecting programs or data)

C. IoT

As can be seen in the following figure, the top 20 IoT technology innovation CPC classifications are:
H04L-0067/12 (Network arrangements or communication protocols for IoT applications)
H04W-0004/40 (Vehicular wireless communications)
G05B-0015/02 (Electric systems controlled by a computer)
H04W-0004/70 (Services for M2M or MTC)
H04W-0012/06 (Authentication, e.g. verifying user identity or authorisation)
G07C-0005/008 (Communicating information to a remotely located station for vehicles)
H04W-0048/18 (Selecting a wireless network or a communication service)
H04W-0072/02 (Selection of wireless resources by user or terminal)
H04W-0072/042 (Downlink wireless resource allocation)
H04L-0067/125 (Network arrangements or communication protocols involving the control of IoT device applications)
H04W-0074/0833 (Wireless channel access using a random access procedure)
H04W-0048/16 (Discovering, processing wireless access restriction)
G16H-0010/60 (Processing data for electronic patient records)
G08C-0017/02 (Arrangements for transmitting signals using a radio link)
H04L-0005/0053 (Allocation of signalling for multiple transmission)
G06Q-0040/08 (Insurance/finance risk analysis)
H04W-0004/80 (Services using short range wireless communications)
H04L-0012/282 (Home networking based on user interaction)
H04W-0024/10 (Scheduling measurement reports for wireless communications)
H04W-0072/0446 (Allocation of wireless resources, the resource being a slot, sub-slot or frame)
5. Technology Innovation Status in Specific Technology

Patents are a good information resource for obtaining state-of-the-art technology innovation insights in a specific technology field.

A. Deep RL Technology Innovation Status

Patents that specifically describe the major deep reinforcement learning (Deep RL) technologies are a good indicator of the Deep RL innovations in a specific innovation entity. To find the Deep RL technology innovation status, 260 published patent applications in the USPTO that are related to the key Deep RL technology innovation are selected for detailed analysis. The following figure shows the Deep RL patent application landscape with respect to the innovation entity. As shown in the figure, Google/DeepMind is the leader in Deep RL technology innovation, followed by IBM, Fanuc Corp., Siemens Healthcare, GM, Huawei, Honda, Baidu, Intel, Royal Bank of Canada, LG Electronics, and Microsoft.
The following figure shows the Deep RL patent application landscape with respect to the key technology innovation field. As shown in the figure, Deep Q-Learning/SARSA/DRQN is the most innovated Deep RL field, followed by Neural Network Training Technique/Procedure, Actor Critic System/A2C/A3C/DDPG, Policy Gradient Algorithm/REINFORCE/PPO/TRPO, Action Selection System/Policy, Multi-agent Algorithm, Imitation Learning, Exploration, Hierarchical RL, Inverse RL Algorithm, Model-based RL, Multi-armed Bandit Algorithm, Partially Observable MDP, Distributed Training, MCTS, RL Processor Architecture, Adaptive Tree-Search Algorithm, DCTD Algorithm, Dynamic Agent Grouping, MDP, Memory-enhanced RL, Reward System, and Task Partitioning.
Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities.

FinTech/Deep Q-Network (DQN) Learning

US20190392314 (Alibaba Group) illustrates a Deep RL application in FinTech. This is an example application of DQN learning regarding cash-return fraud recognition for consumer credit products. "Cash return" refers to using a financial transaction to obtain cash or other benefits. For example, a buyer purchases a product with a credit card and then returns the purchased product for a cash refund, where the purchase transaction can be a fake transaction. As another example, a buyer and a merchant may perform fake transactions to get bonus points from payment platforms or credit card companies. As shown in the figure, the fraud recognition system 100 includes the first DQN 11, a policy unit 12, a sample preparation unit 13, a sample queue 14, the second DQN 15, and an evaluation unit 16. The first DQN 11 recognizes a cash-return transaction, and the second DQN
15 trains the model. In a DQN, a neural network is used to non-linearly approximate a Q-value function. A value of the Q-value function is referred to as a Q-value. The DQN outputs a two-dimensional Q-value vector according to an input feature vector of transaction information; the two Q-values in the Q-value vector correspond to two actions respectively: the transaction being a cash-return transaction and the transaction being a non-cash-return transaction. Transaction amounts can be obtained after the recognition system 100 selects an action and are used as returns for the recognition system 100. The Deep RL system training procedure is as follows:

(1) The first transaction information s1 and corresponding cash-return label value b1 are randomly acquired from a batch of transaction data concerning payments, using a historical database that stores transaction data. The first transaction information s1 can include an attribute feature of the transaction, a buyer's feature, a seller's feature, a logistic feature, etc. Similarly, s2 and b2 can be used to represent the second transaction. The information s1 and s2 correspond to states in the DQN 11 and DQN 15, and the cash-return label values b1 and b2 correspond to label values of actions in the DQNs. The batch of transaction data can include data from hundreds or thousands of transactions.

(2) The information s1 and s2 are input to the first DQN 11 in sequence to output corresponding two-dimensional Q-value vectors q(s1) and q(s2) respectively. The vector q(s1) can be a two-
dimensional vector that includes two Q-value predictive values, one corresponding to a cash-return action and the other corresponding to a non-cash-return action. The vector q(s2) can also be a two-dimensional vector that includes two Q-value predictive values (q1, q2). The value q1 corresponds to an action that the transaction is a cash-return transaction, and q2 corresponds to an action that the transaction is a non-cash-return transaction. The Q-value vector q(s1) for the first transaction is sent to the policy unit 12, and the Q-value vector q(s2) for the second transaction is sent to the sample preparation unit 13.

(3) The policy unit 12 selects an action according to a predetermined policy. For example, the predetermined policy can be the ε-greedy policy: an action is selected from a set containing all available actions according to the ε-greedy policy. The policy unit 12 acquires a cash-return predictive value a1, which predicts whether s1 is a cash-return transaction, for the first transaction from the Q-value vector q(s1) according to the selected policy, and transfers the cash-return predictive value a1 to the sample preparation unit 13.

(4) The sample preparation unit 13 determines the return value r for the action a1 for the first transaction based on a1, b2, and the transaction amount. The sample preparation unit 13 obtains a Q-value predictive value q(s2, b2) from the Q-value vector q(s2) for the second transaction based on the cash-return label value b2 of the second transaction, and uses the Q-value predictive value q(s2, b2) as the maximum of the Q-values contained in the Q-value vector q(s2). The sample preparation unit 13 compares the cash-return predictive value a1 for the first transaction to the cash-return label value b1 to determine a return r. After r is determined, the sample preparation unit 13 calculates the Q-value label value Q(s1, a1) for the first transaction based on the calculated r and q(s2, b2) according to the Deep Q-Learning algorithm as expressed in the following pseudo-code.

(5) The sample preparation unit 13 sends a1, Q(s1, a1), and s1 to the sample queue 14 as one sample, and sends the return r to the evaluation unit 16. In the DQN 11 and DQN 15, a Q-value corresponding to a pair of a state (transaction information) and an action (a cash-return predictive value or a cash-return label value) represents an accumulated return of the system after the action is performed. The Q-learning algorithm determines an optimal action-selection policy such that the policy maximizes the expected value of a reward over a number of steps. The Q-value represents the value of the reward.
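The pseudo-code referenced in step (4) appears as a figure in the source document and is not reproduced in this text. As a minimal illustrative Python sketch of the label computation described in steps (4) and (5), assuming a simple match/mismatch reward scaled by the transaction amount and an illustrative discount factor (neither value is specified above):

```python
import numpy as np

GAMMA = 0.9  # discount factor (illustrative assumption; not given in the description)

def cash_return_reward(a1: int, b1: int, amount: float) -> float:
    """Return r for the first transaction: positive when the predicted
    cash-return value a1 matches the label b1, negative otherwise. Scaling
    by the transaction amount follows the statement that transaction amounts
    are used as returns; the exact scheme is an assumption."""
    return amount if a1 == b1 else -amount

def q_label(r: float, q_s2: np.ndarray, b2: int) -> float:
    """Q-value label for (s1, a1): the reward plus the discounted Q-value
    predicted for the second transaction at its label action b2, which the
    sample preparation unit treats as max_a q(s2, a)."""
    return r + GAMMA * float(q_s2[b2])

# Example: q(s2) holds predictive values for (cash-return, non-cash-return).
q_s2 = np.array([0.7, 0.2])
r = cash_return_reward(a1=1, b1=1, amount=120.0)
print(q_label(r, q_s2, b2=0))  # the sample (s1, a1, Q(s1, a1)) then goes to the queue
```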
(6) A batch of samples (e.g., 500 samples) can be randomly selected from the sample queue 14 for training the second DQN 15. During the training, corresponding to each sample, parameters of the second DQN 15 can be adjusted through a stochastic gradient descent algorithm by using s1 and a1 as inputs and Q(s1, a1) as an output label value, such that an output q(s1, a1) of the second DQN 15 corresponding to inputs s1 and a1 is closer to the label value Q(s1, a1) than that of the second DQN 15 before the parameters are adjusted. After the training has been performed using a number of batches (e.g., 100 batches) of samples (each batch including, e.g., 500 samples), the second DQN 15 transfers and assigns its weights to the first DQN 11. In addition, upon receiving the return r for the first transaction, the evaluation unit 16 adds the return r to a total return value of the system to accumulate the total return value, and thus evaluates the system's learning capacity based on the total return value. The total return value increases with the number of iterations in the training and stabilizes around a fixed value after the second DQN 15 converges.

Google DeepMind's Innovation for Challenging Deep RL Issues

Open questions in the application of RL systems to practical systems are:
Sparse Reward

One of the critical problems in Deep RL is how to design an appropriate reward function. It is easy to design a sparse reward function, which gives a positive reward such as +1 when the task is accomplished correctly and zero otherwise. In practice, however, rarely occurring sparse reward signals are difficult for neural networks to model, and thus use of a sparse reward function can make it hard to find an optimal policy. Hierarchical learning, which breaks large tasks into smaller subtasks, may help accelerate learning on individual tasks by mitigating the exploration challenge of sparse reward problems.

Safety Training

Safe Deep RL tries to ensure reasonable system performance and respect safety constraints during the learning process. Maintaining safety during training of the RL system is a critical issue because most real-world applications require training solutions that are safe to operate against potential catastrophic failures. Learning in a simulated environment and then transferring the learned knowledge to the real-world case can offer a partial solution. Use of human intervention during training, to make sure the RL system does not go into a catastrophic state, can offer another partial solution.

Generalization

Generalizing between tasks remains a challenge for Deep RL algorithms. Although trained agents can solve complex tasks, they struggle to transfer their experience to new environments. In traditional Deep RL systems, generalization is achieved by approximating the optimal value function with a low-dimensional representation using a deep neural network. While this approach works well in some domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable.

US20200143206 addresses the Safe Deep RL issue using continuous action guidance of look-ahead search with Monte Carlo Tree Search (MCTS) for better exploration, combined with Imitation Learning. This innovation differs from AlphaGo-like innovations in multiple aspects: (i) enabling on-policy model-free RL methods to explore safely in hard-exploration domains where
negative rewarding terminal states are ubiquitous, in contrast to off-policy methods which use intensive search to fully learn a policy; (ii) generalizing in that other demonstrators (human or other sources) can be integrated to provide action guidance by using the auxiliary loss refinement; and (iii) using the demonstrator with a small look-ahead search to filter out actions leading to immediate negative terminal states so that model-free RL can imitate those safer actions to learn to explore safely.

US20200151562 addresses the problem of efficiently training a continuous-control reinforcement learning system using demonstrations, and the problem of training using a sparse reward. The critic neural network of the implemented actor-critic system can be trained using an estimated n-step return. Thus return data can be derived from a combination of the reward data and a discounted reward from a predicted succession or rollout of n-1 transitions forward from a current state of the environment, where n can be selected between n=1 and n>1 depending upon whether demonstration or operation transitions are selected. This allows the system to adapt when demonstration trajectories are longer than operation transitions, by allowing a sparse reward to affect more state-action pairs, in particular the values of the n preceding state-action pairs. The critic neural network receives state, action, and reward/return data and defines a value function for a TD error signal, which is used to train both the actor and critic neural networks. The actor neural network receives the state data and has an action data output defining actions in a continuous action space. The actor-critic system can be implemented in an asynchronous, multi-threaded manner comprising a set of worker agents, each with a separate thread, a copy of the environment and experience, and respective network parameters used to update a global network.

B. Deep Learning for Autonomous Vehicle Technology Innovation Status

Patents that specifically describe the major deep learning applications in Autonomous Vehicles are a good indicator of the deep learning for Autonomous Vehicle innovations in a specific innovation entity. To find the deep learning for Autonomous Vehicle technology innovation status, 201 published patent applications in the USPTO that are related to the key deep learning for Autonomous Vehicle technology innovation are selected for detailed analysis.
The following figure shows the deep learning for Autonomous Vehicle patent application landscape with respect to the innovation entity. As shown in the figure, the key deep learning for Autonomous Vehicle innovation entities are GM, Ford, Toyota, Baidu, Honda, Zoox, Inc., Nvidia, Uber, Huawei, Intel, LG Electronics, Metawave Corporation, Micron Technology, NEC Laboratories America, and Samsung Electronics. The following figure shows the deep learning for Autonomous Vehicle patent application landscape with respect to the key technology innovation field. As shown in the figure, Navigation/Operation/Motion Control (End-to-End/Collision-Avoidance/Parking) and Obstacle/Object Detection/Identification are the most innovated deep learning for Autonomous Vehicle technologies, followed by Behavior/Motion/Trajectory/Path Prediction/Planning, Surrounding Traffic/Vehicle/Road/Lighting Situation/Safety Recognition, Surrounding Scene Recognition, Localization (Odometry), Surrounding Object Behavior/Movement Prediction (Scene Flow), Lane Detection/Identification, Performing Vehicle Maneuvers (Turn/Lane
Change), Navigation Map Generation/Update/Correction, and Occupant Emotion/Behavior Recognition.

Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities.

Object Detection in 3D Point Clouds / US20200019794

The 3D point cloud of a Light Detection And Ranging (LiDAR) sensor can be used for a detailed understanding of the environment surrounding the autonomous vehicle. The computational burden introduced by the third spatial dimension, however, renders a naive transfer of Convolutional Neural Networks (CNNs) from 2D computer vision applications to native 3D perception in point clouds infeasible for large-scale applications. As a result, previous
approaches tend to convert the data into a 2D representation first, where spatially adjacent features are not necessarily close to each other in the physical 3D space, requiring models to recover these geometric relationships and leading to poorer performance. Thus, a computationally efficient approach to detecting objects in 3D point clouds using CNNs natively in 3D is developed. The following figure shows a flow chart outlining a method of detecting objects within a 3D environment. A LiDAR sensor mounted on a vehicle monitors its locale and generates data on a sensed scene around the vehicle that provides a representation of the 3D environment in step 800. The next step is to convert a point cloud captured by the LiDAR sensor to a discrete 3D grid (step 802), such that for each cell that contains a non-zero number of points, a feature vector is extracted based on the statistics of the points in the cell (step 804). The feature vector holds a binary occupancy value, the mean and variance of the reflectance values, and three shape factors. Step 804 performs a sparse convolution across this native 3D representation followed by a ReLU (Rectified Linear Unit) non-linearity using a Convolutional Neural Network (CNN), which returns a new sparse 3D representation.
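As a rough Python sketch of steps 802 and 804 (converting the captured point cloud to a discrete 3D grid and extracting a per-cell feature vector from point statistics), assuming a simple axis-aligned voxelization and omitting the three shape factors:

```python
import numpy as np

def pointcloud_to_sparse_grid(points: np.ndarray, reflectance: np.ndarray,
                              cell_size: float = 0.2) -> dict:
    """Convert a LiDAR point cloud (N x 3) into a sparse 3D grid keyed by
    integer cell coordinates. For each occupied cell a feature vector is
    built from the statistics of the points it contains, loosely following
    the description above (binary occupancy, mean and variance of the
    reflectance values); the shape factors are left out for brevity."""
    cells = {}
    indices = np.floor(points / cell_size).astype(int)
    for idx, refl in zip(map(tuple, indices), reflectance):
        cells.setdefault(idx, []).append(refl)
    grid = {}
    for idx, refls in cells.items():
        refls = np.asarray(refls, dtype=float)
        grid[idx] = np.array([1.0, refls.mean(), refls.var()])  # occupancy, mean, variance
    return grid

# Toy usage: a handful of random points with reflectance values.
pts = np.random.rand(5, 3) * 2.0
refl = np.random.rand(5)
sparse_grid = pointcloud_to_sparse_grid(pts, refl)
print(len(sparse_grid), "occupied cells")
```

Only occupied cells are stored, which is what makes the subsequent sparse convolution and voting step computationally feasible in native 3D.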
In step 810, as shown in the following figure, each non-zero input feature vector casts a set of votes, weighted by filter weights within units of the CNN 136, to its surrounding cells in the output layer 200, as defined by the receptive field of the filter. This voting/convolution, using the weights, moves the data between layers (401, 402, 404, 200) of the CNNs. The voting output is passed through (step 814) a ReLU (Rectified Linear Unit) non-linearity, which discards non-positive features. In step 818, the predicted confidence scores in the output data layer indicate the confidence as to whether an object exists within the cells of the n-dimensional grid data input to the network.

Lane Change / US20200139973

Performing safe and efficient lane changes is a crucial feature for creating fully autonomous vehicles. With Deep Reinforcement Learning (RL), an autonomous agent can obtain driving skills by learning from trial and error without any human supervision. A method for learning
lane-change policies via the Deep RL actor-critic algorithm [5] is developed, wherein each lane-change policy describes actions selected to be taken by an autonomous vehicle. The Deep RL system implements an actor-critic neural network (NN) architecture. For better training stability, the actor-critic NN architecture is implemented with two NNs: an actor network and a critic network. This architecture decouples the action evaluation and the action selection process into two separate deep NNs. The actor network learns lane-change policies as a set of hierarchical actions. Each lane-change policy is a distribution over hierarchical actions. For example, the actor network processes the image data received from the environment to learn the lane-change policies as the set of hierarchical actions, represented as a vector of probabilities of action choices and a first set of parameters coupled to each discrete hierarchical action. The actor network collects transitions, each comprising an observation (i.e., a frame of image data), a hierarchical action, a reward, and a next observation (i.e., the next frame of image data), and stores them in a replay memory. The critic network predicts action values via an action-value function, evaluates a lane-change policy, and calculates loss and gradients to drive learning and update the critic network. For example, the critic network draws a mini-batch of the transitions collected by the actor network, each comprising an observation, hierarchical action, reward, and a next observation, from the replay memory and uses the mini-batch of transitions to improve the critic network's prediction of the state/action/advantage value.

The following figure shows a block diagram of an example implementation of the actor-critic network architecture. The actor-critic network architecture 102 is based on a Deep Recurrent Deterministic Policy Gradient (DRDPG) algorithm, which is a recurrent version of the DDPG algorithm [6]. The actor-critic network architecture 102 includes three sub-systems (dotted-line boxes), where each sub-system includes an instance of common elements: an input of image data (s) 129, a CNN module 130, a set of region vectors 132, a hidden state vector 133, an attention network 134, a spatial context vector 135, and an LSTM network 150. Each sub-system represents the actor-critic network architecture 102 being continuously applied to updated information at different steps within a time series (t−1, t, t+1 . . . T). In other words, each dotted-line box represents processing by the actor-critic network architecture 102 at different instances in time. The actor-critic network architecture 102 includes both a temporal attention module 160 that learns to weigh the importance of previous frames of the image data at any given image data 129, and a spatial attention module 140 that learns the importance of different locations in the image at any given image data 129. The outputs generated by the actor-critic network architecture 102 are combined over a temporal dimension (via the temporal attention module 160) and over a spatial dimension (via the spatial attention module 140) to generate the hierarchical actions 172. The bracket associated with the spatial attention module 140 (spatial attention) represents repetitions of the actor-critic network architecture 102 along the spatial dimension. The spatial attention module 140 detects the most important and relevant regions in the image for driving and adds a measure of importance to different regions in the image. The bracket associated with the temporal attention module 160 (temporal attention) represents repetitions of the actor-critic network architecture 102 along the temporal dimension. The temporal attention module 160 weighs the past few frames with respect to importance to decide the current driving policy.

Processing of image data 129 begins when the CNN module 130 receives and processes the image data 129 captured by sensors to extract features and generate a feature map/tensors (represented by the top big cuboid inside 130). The CNN module 130 can evaluate lane-change policies of the hierarchical actions 172 to generate an action-value function (Q). A set of region vectors 132 that collectively make up the feature map is extracted. Each region vector corresponds to features extracted from a different region/location in the convolutional feature maps/tensors (represented by the top big cuboid inside 130) that were extracted from the image data 129. The region vectors 132 are used by the spatial attention network 134 in calculating the context feature vectors 135, which are then processed by the LSTM 150 based temporal attention block 160. The LSTM outputs 152-1, 152-2 . . . 152-3 are then combined to generate a combined context vector (CT) 162. The combined context vector (CT) 162 is a lower-dimensional tensor representation that is a weighted sum of all the LSTM outputs through T time steps (e.g., a weighted sum of all the region vectors). The combined context vector (CT) 162 is then passed through the fully connected (FC) layers 170 before computing the hierarchical actions 172 of the actor network. The hierarchical action space of hierarchical actions 172 is designed for lane-change behaviors and can include various lane-change policies. To deal with lane-change behaviors in driving policies, it is necessary both to make high-level decisions on whether to have lane changes and to do low-level planning regarding how to make lane changes. For example, there can be three mutually exclusive, discrete, high-level actions: left lane-change, lane following, and right lane-change. At each time step, a driver agent must choose one of the three high-level actions to execute. During training, the system will learn to choose from the three high-level action decisions and apply proper parameters specific to that action.

[5] The actor-critic RL algorithm combines the stability of on-policy training (e.g., Policy Gradient) with the data efficiency of off-policy approaches (e.g., Deep Q-Network (DQN) learning), where the data efficiency of policy gradient is improved by training an off-policy critic.
[6] The DDPG algorithm combines Q-learning and the Policy Gradient algorithm. The actor is a policy network that takes the state as input and outputs the exact (continuous) action, instead of a probability distribution over actions. The critic is a Q-value network that takes state and action as input and outputs the Q-value. US20190004518 (Baidu) illustrates a Deep RL application in Unmanned Aerial Vehicles (drones) using DDPG.
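To make the hierarchical action space concrete, here is a minimal Python sketch of how a driver agent might sample one of the three mutually exclusive high-level actions together with the parameters coupled to it; the two-parameter layout and the parameter meanings are assumptions for illustration, not taken from the patent.

```python
import numpy as np

HIGH_LEVEL_ACTIONS = ["left_lane_change", "lane_following", "right_lane_change"]

def sample_hierarchical_action(action_probs: np.ndarray,
                               action_params: np.ndarray) -> tuple:
    """Pick one of the three mutually exclusive high-level actions from the
    actor's probability vector, and return it together with the continuous
    parameters coupled to that action (here illustratively a target speed
    factor and a lateral offset)."""
    idx = int(np.random.choice(len(HIGH_LEVEL_ACTIONS), p=action_probs))
    return HIGH_LEVEL_ACTIONS[idx], action_params[idx]

# Toy actor output: probabilities over the 3 actions and 2 parameters per action.
probs = np.array([0.1, 0.8, 0.1])
params = np.array([[0.5, -0.2],
                   [0.7,  0.0],
                   [0.5,  0.2]])
print(sample_hierarchical_action(probs, params))
```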
Vehicle Trajectory Planning / US20200174490

Autonomous vehicles include environmental sensing systems that monitor the environment of the vehicle. The sensing systems generate sensor data from sweeps that characterize aspects of the current environment of the vehicle. In order to make effective driving decisions, the computing system of an autonomous vehicle processes information about the vehicle's environment along with navigation data and information about previous locations of the vehicle to determine a planned trajectory of the vehicle. The planned trajectory can indicate a series of waypoints that each represent a proposed location for the vehicle to maneuver to at a time in the near future. The computing system selects waypoints taking into account an intended route or destination of the vehicle, safety (e.g., collision avoidance), and ride comfort for passengers in the vehicle.

The following figure depicts a block diagram of an example computing environment 100 of an autonomous vehicle. The environment 100 includes a neural network system 102, a trajectory management system 114, an external memory 120, a navigation planning system 126, and a vehicle control system 128. The neural network system 102 processes input data components 108, 110, and 112 to score a set of locations in the vicinity of the vehicle. The score for a given location can represent a likelihood that the location is an optimal location for inclusion as a waypoint in a planned trajectory for the vehicle. The trajectory management system 114 processes the scores from the neural network system 102 to select one of the locations as a waypoint for a given time step in the planned trajectory. The trajectory management system 114 also interfaces with the external memory 120, which stores information about previous locations of the vehicle and locations previously selected for a planned trajectory.
The neural network system 102 and the trajectory management system 114 interact with the navigation planning system 126, which generates navigation data 112 representing a planned navigation route of the vehicle. The planned navigation route can indicate the goal locations of the vehicle's travel and, optionally, a targeted path to arrive at the goal locations. The neural network system 102, in coordination with the trajectory management system 114, generates a planned trajectory for the vehicle by determining, at each of a series of time steps, a waypoint for the planned trajectory at that time step. At each time step, the system 102 processes neural network inputs that include waypoint data 108, environmental data 110, and navigation data 112
to generate a set of scores that each correspond to a different location in the vicinity of the vehicle. The waypoint selector 116 of the trajectory management system 114 can then select one of the scored locations as the waypoint for the planned trajectory at the current time step, based on the set of scores generated by the neural network system 102, by selecting the location corresponding to the highest score. The planning system 126 controls the operational mode of the vehicle. For example, during a first mode for normal autonomous driving, the planning system 126 activates the trajectory management system 114 to use planned trajectories determined using the neural network system 102 as the basis for maneuvering the vehicle. In order for the vehicle to maneuver along a planned trajectory, a sequence of control actions can be performed by the vehicle control system 128 to cause the vehicle to follow the trajectory. The control actions can include steering, braking, and accelerating.
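A minimal Python sketch of the waypoint selection logic described above (scoring candidate locations at each time step and keeping the highest-scoring one); the dummy scoring function merely stands in for the neural network system 102 and is purely illustrative:

```python
import numpy as np

def select_waypoint(candidate_locations: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Keep the candidate location with the highest score as the waypoint
    for the current time step, mirroring the waypoint selector described above."""
    return candidate_locations[int(np.argmax(scores))]

def build_trajectory(score_fn, candidates_per_step: list) -> list:
    """Roll the selection forward over a series of time steps to assemble a
    planned trajectory; score_fn stands in for the neural network system."""
    trajectory = []
    for candidates in candidates_per_step:
        scores = score_fn(candidates, trajectory)
        trajectory.append(select_waypoint(candidates, scores))
    return trajectory

def dummy_scorer(candidates, trajectory):
    # Purely illustrative: prefer candidate locations closest to the origin.
    return -np.linalg.norm(candidates, axis=1)

steps = [np.random.rand(4, 2) for _ in range(3)]  # 3 time steps, 4 candidates each
print(build_trajectory(dummy_scorer, steps))
```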
C. Deep Learning for 5G Technology Innovation Status

Patents that specifically describe the major deep learning applications in 5G are a good indicator of the deep learning for 5G innovations in a specific innovation entity. To find the deep learning for 5G technology innovation status, 24 published patent applications in the USPTO that are related to the key deep learning for 5G technology innovation are selected for detailed analysis. The following figure shows the deep learning for 5G patent application landscape with respect to the innovation entity. As shown in the figure, the deep learning for 5G innovation entities are Samsung Electronics, Verizon, Virginia Tech, Ball Aerospace & Technologies, Beijing University of Posts and Telecommunications, Bluecom Systems and Consulting, CableLabs, DeepSig Inc., Futurewei Technologies, Google, Incelligent P.C., Korea Advanced Institute of Science and Technology, Micron Technology, Parallel Wireless, Inc., QoScience, Inc., Qualcomm, and Sony.
The following figure shows the deep learning for 5G patent application landscape with respect to the key technology innovation field. As shown in the figure, MIMO is the most innovated deep learning for 5G technology, followed by Cell Coverage Planning, Network Performance Optimization (QoS), Radio Resource Management, Radio Access Network (RAN), RF System, Adaptive Traffic Scheduler, Cognitive Radio (SDR), Communication Channel Modeling, Controlling Spectral Regrowth, Dynamic Software Defined Networking (DSDN), Network Traffic Optimization, Predicting Received Signal Strength, Proactive Network Management, Transceiver Beam Management, and Wireless Scene Identification.

Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities.
MIMO Adaptive Antenna / US20190372644

For next-generation cellular systems such as the 5G wireless communications system, efficient and unified radio resource management is desirable. To decrease the propagation loss of the radio waves and increase the transmission coverage, the 5G system implements a base station antenna that is a massive multiple-input multiple-output (MIMO) beamforming array antenna. US20190372644 illustrates a method for controlling and optimizing the broadcast beam for the base stations (jointly tuning the antenna beam width and e-tilt angle). In a dynamic scenario where user equipments (UEs) are assumed to be moving according to some mobility pattern in a given cell of a base station (BS) of the 5G system, the antenna beam selection problem using deep reinforcement learning (Deep RL) includes two parts: offline training and online deployment. The offline training part learns the UE distribution pattern from history data and teaches the neural network the UE distribution pattern. After obtaining typical UE distribution patterns, these patterns together with ray-tracing data can be used to train the Deep RL network. After the neural network is trained, it can be deployed to provide beam guidance for the network online. In Deep RL terminology, selecting the beam parameters (beam shape, tilt angles) can be regarded as the action. The measurements from the UEs in the network can be regarded as the observations. Based on the observations, several reward values can be calculated. The reward can be the total number of connected UEs in the network based on the state and action taken. In DQN-based Deep RL, the Q-values are predicted using a deep neural network. The input to the neural network is the UEs' state of the Deep RL environment, and the output is the Q-values corresponding to the possible actions. Two identical neural networks are used in predicting the Q-values. One is used for computing the running Q-values; this neural network is called the evaluation network. The other neural network, called the target neural network, is held fixed for some training duration, say for M episodes, and every M episodes the weights of the evaluation neural network are transferred to the target neural network. The following table shows the detailed algorithm by which the BS antenna beams are selected in dynamic scenarios for the multiple-sector case, where each sector can independently select its own beam parameters to maximize the overall network coverage.
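The algorithm table referenced above appears as a figure in the source document. As a hedged illustration of the two-network arrangement it relies on (an evaluation network updated continuously and a target network refreshed every M episodes), here is a skeleton Python sketch; the network, state handling, and reward computation are stubbed, and all dimensions are illustrative assumptions:

```python
import numpy as np

class TinyQNet:
    """Placeholder Q-network: a single linear layer mapping UE-state features
    to Q-values over the candidate beam parameter sets (beam width, e-tilt)."""
    def __init__(self, n_features: int, n_beams: int):
        self.w = np.zeros((n_features, n_beams))

    def q_values(self, state: np.ndarray) -> np.ndarray:
        return state @ self.w

def train_beam_dqn(episodes: int, M: int, n_features: int = 8, n_beams: int = 6):
    """Skeleton of the two-network DQN loop described above: the evaluation
    network computes the running Q-values, while the target network is held
    fixed and refreshed from the evaluation network every M episodes."""
    eval_net = TinyQNet(n_features, n_beams)
    target_net = TinyQNet(n_features, n_beams)
    for episode in range(episodes):
        state = np.random.rand(n_features)                  # stand-in for UE measurements
        action = int(np.argmax(eval_net.q_values(state)))   # greedy beam choice
        # reward = number of connected UEs after applying the chosen beam (stubbed)
        # ... compute the TD target with target_net and update eval_net.w here ...
        if (episode + 1) % M == 0:
            target_net.w = eval_net.w.copy()                # periodic weight transfer
    return eval_net, target_net

train_beam_dqn(episodes=20, M=5)
```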
MIMO Communication Channel / US20190274108

A large-scale MIMO system with high spectrum and energy efficiency is a very promising key technology for 5G wireless communications. For the large-scale MIMO system, accurate channel state information (CSI) acquisition is a challenging problem, especially when each user has to
distinguish and estimate numerous channels coming from a large number of transmit antennas from BSs in the downlink, as the overhead for channel estimation is linearly proportional to the number of antennas. US20190274108 illustrates a cost-effective and reliable MIMO communication channel estimation method using deep learning technology. In communication channel estimation, superimposed pilot subcarriers are used because there is no need to increase the number of subcarriers used even as the number of antennas increases. Especially for FDD channel estimation, using superimposed pilot subcarriers instead of traditional orthogonal subcarriers can reduce the number of subcarriers required from M*Np down to Np (M is the number of transmitter antennas; Np is the number of subcarriers required per Tx antenna). By exploiting deep learning to learn DL (downlink) channel estimation, a cost-effective and reliable MIMO communication channel estimation becomes possible: offline training of the first deep learning network to learn DL channel estimation is performed. The user equipment (UE) can use this process to obtain the actual DL channel estimation during real-time operation (online) based on received inputs from the base-station side. Subsequently, by feeding these inputs to the offline-trained deep learning network, the UE can feed in the learned DL channel estimation and obtain the compressed encoded feedback. Then, the compressed encoded feedback is sent back by the UE to the BS to minimize the UL feedback cost. The BS feeds the received compressed encoded feedback to the second half of the trained deep learning network to recover the DL channel estimation as obtained by the UE. Once the DL channel estimation is recovered, the BS can design the optimal precoding matrix accordingly. Via this procedure, both DL and UL transmission costs can be significantly reduced in the massive-MIMO context. The following figure shows an example deep learning network using the Convolutional Neural Network (CNN) model for DL (DL-CNN) channel estimation. The DL-CNN channel estimation runs per time slot per subcarrier. Assuming the training duration is T and the number of training vectors (including input and channel vectors) is Y, the CNN will be trained offline with T*Y samples. The offline training process is repeated to adapt to channel change; the interval is based on running performance. Channel multi-path parameters are acquired via channel sounding.
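To make the encoder/decoder split concrete, here is a minimal Python sketch of the feedback path described above, with single linear layers standing in for the two halves of the trained DL-CNN; all dimensions and the linear approximation are illustrative assumptions:

```python
import numpy as np

def encode_csi_at_ue(channel_estimate: np.ndarray, W_enc: np.ndarray) -> np.ndarray:
    """UE side: compress the learned DL channel estimate into a short
    feedback vector (a linear layer stands in for the first half of the
    trained network; the real design uses a CNN)."""
    return W_enc @ channel_estimate

def decode_csi_at_bs(feedback: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    """BS side: recover the DL channel estimate from the compressed feedback
    using the second half of the trained network, before precoding design."""
    return W_dec @ feedback

# Toy dimensions: 64 channel coefficients compressed to 8 feedback values.
n_channel, n_feedback = 64, 8
W_enc = np.random.randn(n_feedback, n_channel) / np.sqrt(n_channel)
W_dec = np.random.randn(n_channel, n_feedback) / np.sqrt(n_feedback)

h_est = np.random.randn(n_channel)      # DL channel estimate at the UE
fb = encode_csi_at_ue(h_est, W_enc)     # short uplink feedback
h_recovered = decode_csi_at_bs(fb, W_dec)  # BS-side reconstruction
print(fb.shape, h_recovered.shape)
```

The design point is that only the short feedback vector crosses the uplink, which is how the scheme keeps the UL feedback cost low while the BS still recovers an estimate good enough to design the precoding matrix.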
  39. 39. 39    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Radio Access Network (RAN)/ US20200178093 A flexible and dynamic RAN can perform the end-to-end self-configuration of the RAN, sense the quality of the user experience and the network performance in real time, and perform predictive analysis and intelligent optimization. According to the application scenarios supported by the 5G wireless communication system, the RAN slices can be classified into four typical types, including a wide-area seamless coverage slice, a hot-spot high-capacity slice, a low-power large-connection slice and a low-delay high-reliability slice. In practical network deployment, a new slice type can be added according to the network requirements. US20200178093 illustrates the RAN performance evaluation of the existing slices7 using deep learning technology. The RAN performance evaluation is performed for the existing slices in the network to reduce the complexity of network operations and the instability of network performance that are caused by frequent changes of the global network configuration. The evaluation module evaluates the existing slices in the current network according to historical wireless transmission data of the network and the terminal measurement reports that are collected by the data collection processor, and performs a reserving, adding or deleting operation for the existing slices in the network, so as to determine the global slice setting. The evaluation process includes: according                                                              7 The network performance includes a set of network spectrum efficiency, network transmission delay, network connection number, network power consumption efficiency, network delay jitter, and extreme transmission speed.
  40. 40. 40    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    to real-time wireless transmission data of the network and the terminal measurement data that are collected by the data collection processor from the RAN, and by using a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN), extracting a spatial feature and a temporal feature of the data respectively as shown in the following figure, obtaining a network performance status level corresponding to the current slice, and reporting it to the centralized evaluation module. For different types of slices, the weight values of the different network performance status levels in the set are different. The CNN and the RNN can provide a mapping relationship between wireless transmission data, terminal measurement data and network performance statuses through a training process on historical measurement data of the network. According to the real-time wireless transmission data and terminal measurement data, a network performance status level can be obtained: According to the historical wireless transmission data and the terminal measurement data obtained in the RAN and by using the CNN and the RNN, the spatial feature and temporal feature of the data are extracted respectively. Then, the network performance status level corresponding to the historical wireless transmission data and the terminal measurement data is determined. The determined network performance status is compared with the actual network performance status, and the values of the weight parameters of the CNN and the RNN are adjusted according to the comparison result. Based on the RAN performance evaluation, RAN self-optimization within the slice can be done through Deep Reinforcement Learning (DRL). The current networking mode and
  41. 41. 41    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    resource allocation of the slice are used as status variables of the DRL and are input to a node supporting the training of the deep learning model. By using the service level and resource utilization efficiency obtained based on the historical resource configuration scheme in the data collection processor, an intra-slice resource allocation adjustment strategy is output to realize the highest network resource utilization efficiency and the best service performance level. The to-be-adjusted resource allocation ratio of each slice is used as an action variable of the DRL. After the adjustment of the resource allocation, the network resource utilization efficiency collected by the data collection processor and the performance indicators of each slice are used as DRL reward variables. Then, the online learning is performed through a Deep Q Network (DQN). Following figure shows a flowchart illustrating a process of deriving a resource adjustment strategy within a slice by using a DRL training model.
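As a rough illustration of how the DRL interface described above could be formalized, the toy environment below exposes the status, action, and reward variables named in the text (allocation ratios as state, per-slice ratio adjustments as actions, and a utilization-times-performance reward); all names, slice labels, and numbers are hypothetical placeholders, not the patent's implementation:

```python
import random

# Hypothetical slice types matching the four typical RAN slice categories.
SLICES = ["wide_coverage", "hot_spot", "low_power_mMTC", "low_delay_URLLC"]
STEP = 0.05  # adjustment granularity of the allocation ratio

class SliceAllocationEnv:
    def __init__(self):
        self.allocation = {s: 1.0 / len(SLICES) for s in SLICES}

    def state(self):
        """Status variables fed to the DQN: the current intra-slice allocation ratios."""
        return [self.allocation[s] for s in SLICES]

    def actions(self):
        """Action variables: raise or lower one slice's ratio by STEP."""
        return [(s, d) for s in SLICES for d in (+STEP, -STEP)]

    def step(self, action):
        slice_name, delta = action
        self.allocation[slice_name] = min(1.0, max(0.0, self.allocation[slice_name] + delta))
        total = sum(self.allocation.values()) or 1.0
        self.allocation = {s: v / total for s, v in self.allocation.items()}
        # Reward variables would come from the data collection processor; the random
        # values below are placeholders for measured utilization and per-slice KPIs.
        utilization = random.uniform(0.5, 1.0)
        performance_level = random.uniform(0.5, 1.0)
        return self.state(), utilization * performance_level

env = SliceAllocationEnv()
next_state, reward = env.step(random.choice(env.actions()))
```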
  43. 43. 43    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    D. Deep Learning for Cybersecurity Technology Innovation Status Patents that specifically describe the major deep learning applications in cybersecurity are a good indicator of the deep learning for cybersecurity innovations in a specific innovation entity. To find the deep learning for cybersecurity technology innovation status, 31 published patent applications in the USPTO that are related to the key deep learning for cybersecurity technology innovation are selected for detail analysis. Following figure shows the deep learning for cybersecurity patent application landscape with respect to the innovation entity. As shown in the figure, deep learning for cybersecurity innovation entities are Oracle, IBM, Intel, Microsoft, NEC, Accenture, BAE Systems, Bank of America, Boeing, Cisco, Deep Instinct Ltd, Electronics and Telecommunications Research Institute, ESTsecurity Corp., Fortinet, Inc., F-Secure Corporation, General Electric, Infoblox Inc., Royal Bank of Canada, Samsung Electronics, Saudi Arabian Oil Company, Shape Security, Sophos Limited, Trend Micro Incorporated, UT-Battelle, LLC, and ZingBox, Inc.
  44. 44. 44    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Following figure shows the deep learning for cybersecurity patent application landscape with respect to the key technology innovation field. As shown in the figure, Malware Detection is the most innovated deep learning for cybersecurity technology, followed by Network Security, Attack Activity/Event Detection, Data Security, False Tenant Model, IoT Device Security, Malicious Webpage Detection, Security Alerts, Security Information Analysis, Threat Detection, and Threat/Vulnerability Assessment. Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities. Industrial IoT Cyber-Attack Detection/ US20180159879 Industrial IoT control systems that operate the industrial asset systems (e.g., power turbines, jet engines, locomotives, autonomous vehicles, etc.) are vulnerable to threats, such as cyber-attacks (e.g., associated with a computer virus, malicious software, etc.). Current methods primarily consider threat detection in Information Technology (such as computers that store, retrieve, transmit, and manipulate data) and Operation Technology (such as direct monitoring devices and
  45. 45. 45    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    communication bus interfaces). However, cyber-threats can still penetrate through these protection layers and reach the physical domain of the industrial asset systems, as seen in 2010 with the Stuxnet attack8 . Such attacks can diminish the performance of an industrial asset and can cause a total shut down. US20180159879 illustrates a desirable method to protect an industrial asset from such cyber-attacks in an automatic, rapid, and accurate manner. Following figure shows a high-level architecture of the cyber-threats protection system 100. The system includes a "normal space" data source 110 and an abnormal or "threatened space" data source 120. The normal space data source 110 stores, for each of a plurality of "monitoring nodes" 130, a series of normal values over time that represent normal operation of an industrial asset. The threatened space data source stores, for each of the monitoring nodes 130, a series of threatened values that represent a threatened operation of the industrial asset (e.g., when the system is experiencing a cyber-attack).                                                              8 https://www.mcafee.com/enterprise/en-us/security-awareness/ransomware/what-is-stuxnet.html
  46. 46. 46    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Information from the normal space data source 110 and the threatened space data source 120 are provided to a threat detection model creation computer 140 that uses these data to create a decision boundary that separates normal behavior from threatened behavior. Then, the decision boundary is used by a threat detection computer 150 executing a threat detection model 155. The threat detection model 155 monitors streams of data from the monitoring nodes 130 comprising data from sensor nodes, actuator nodes, and/or any other critical monitoring nodes. Then, the threat detection model 155 calculates the deep learning features for each monitoring node based on the received data and "automatically" outputs a threat alert signal to a remote monitoring device 150, and then to a customer. Following figure shows an autoencoder 1400 that is used for the deep learning features. An encode process turns raw inputs 1410 into hidden layer 1420 values. A decode process turns the hidden layer 1420 values into output 1430. Note that the number of hidden nodes can be specified and corresponds to the number of features to be learned. The autoencoder estimates values for the W-matrix (weight matrix) and b-vector (affine terms) from a first group of training data. The W-matrix and b-vector can then be used to calculate the features from a second group of training
  47. 47. 47    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    data. A non-linear decision boundary can be computed for the feature vectors. Then, the threat detection computer 150 uses the decision boundary to classify whether a feature vector of new data corresponds to "normal" or "attack" data. For high speed detection, a De-noised Auto-Encoder ("DAE") can be used. For example, the DAE learns features from normal and abnormal data sets. Through the input of many examples of normal data, the DAE algorithm learns a representation of the data set in reduced dimensions exploiting PCA (Principal Component Analysis). This representation can be a non-linear transform of the raw data that encapsulates the information of the original data. Malicious Code Detection/ US20180285740 US20180285740 provides a deep learning application for detection of malicious code. Malicious code, like all code, is a form of language having concepts such as grammar and context, but has its own idiosyncrasies built in. Thus, deep learning-based Natural Language Processing (NLP) can be used to treat malicious code detection as a language processing problem. Following figure shows the deep learning system 10 for detection of malicious code.
  48. 48. 48    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    The text editor 102 submits code segments received across a network 150 to the input processor. The input processor includes a feature vector generator to extract a feature vector. Then, the processed code segments are provided to the neural network 20 as input text. The neural network 20 includes three types of layers: convolutional layers 108 (CNN), recurrent layers 110 (RNN/LSTM) and a classification layer 112. Before input, the input text is converted into a 1300 x 80 matrix9 , where the ith row represents the ith letter of the sample, and each column is of value 0, except for the column that represents the character (one-hot encoding). Beginning with the input of a sequence of one-hot encoded vectors X=(e1, e2, . . . , eT), a convolutional layer is applied, resulting in a new set of feature vectors F=(f1, f2, . . . , fT′) of size |F| and with sequence length T′. At this point batch normalization 106 can be applied, resulting in a regularized output F. Next, a bidirectional LSTM is employed, resulting in two hidden state vectors (the forward and backward hidden states), each with dimension equivalent to the number of LSTM hidden states. Finally, these features are provided into a classification layer, in this case a sigmoid function. The classification layer performs a determination of whether the input text is malicious code or benign code. The neural network data can be stored on a data storage, such as database 180, for additional analysis. E. Blockchain Privacy Technology Innovation Status To find blockchain privacy technology innovation status, 35 published patent applications in the USPTO that are related to the key blockchain privacy technology innovation are selected for detail analysis. Following figure shows the blockchain privacy patent application landscape with respect to the innovation entity. As shown in the figure, the key blockchain privacy innovation entities are IBM, Sensoriant, Inc., Alibaba Group, JPMorgan Chase Bank, Eygs LLP, Richard Postrel, SAP Se, Applied Blockchain, and NEC Laboratories Europe GmbH.                                                              9 The neural network analyzes the input text on a per-character basis, not words or commands. There is a vocabulary of 80 characters and a sequence length of 1300 characters.
  49. 49. 49    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Following figure shows the blockchain privacy patent application landscape with respect to the key technology innovation field. As shown in the figure, Data Privacy is the most innovated blockchain privacy technology followed by Zero-Knowledge Proof, Transaction Privacy, Smart Contract Privacy, User Profile Sharing (KYC), IoT Privacy, Multi-Chain Privacy, Lightweight Blockchain Client Privacy, Privacy-Preserving Machine Learning, Data Sharing, and Privacy-Preserving Shared Distributed Computing.
  50. 50. 50    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities. Anonymous Sharing of User Profile (KYC)/US20190028277 User consent-based information sharing is common. An example case is sharing verified personal information to improve compliance with know your customer (KYC) standards. If a customer makes changes to personal information, then the changes made to the customer data by one provider should be immediately visible to others. However, service providers' relationships with customers must not be revealed while sharing information. Additionally, when a customer decides to revoke a service provider's access to his/her data, that provider must not be able to view any updates to the customer's data thereafter. Such a privacy mechanism can be implemented by storing a user profile in a blockchain by an authorized member of the blockchain, receiving a request by another authorized member of the blockchain to access the user profile, identifying that the request for the user profile is from the other authorized member of the blockchain, creating a signed message that includes consent to share the user profile with the other authorized member of the blockchain, and transmitting the signed
  51. 51. 51    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    message to the other authorized member of the blockchain. These operations are performed without revealing the blockchain member identities of the authorized member of the blockchain and the other authorized member of the blockchain to any of the blockchain members. Following figure illustrates a configuration 100 for the anonymous sharing of a user profile. The configuration 100 includes a customer device 112 and providers ‘A’ 116 and ‘B’ 118. The provider 116 is the first provider, which has an established account profile stored in a blockchain on behalf of the customer device 112. The provider 118 is the third party provider seeking access to
  52. 52. 52    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    the user credentials (i.e., name, profile information, verified identity documents, etc.). The blockchain members 102-108 represent established leaders or consensus members of the blockchain 110. The providers can also be members of the blockchain and participate in consensus. In operation, the customer device 112 enrolls with membership services 114, which receive customer account information and store the information in a profile space of the blockchain. The provider 116 receives the customer information (KYC) 124. Then, the provider 116 creates a user profile based on a new account, updated account, etc. The information can be encrypted and stored 126 in the blockchain 110. The provider 116 can add/update the customer information to the blockchain after encryption using a newly generated symmetric key. A request to access the profile can thus be seen as a request to the blockchain for the decryption key. For anonymous key-sharing, to avoid a peer-to-peer key exchange that can violate the privacy of providers, the key is exchanged through the blockchain itself. Relying on a trusted PKI, the profile decryption key is itself encrypted in such a way that only other authorized providers on the blockchain network will be able to decrypt the key. Subsequently, another provider 118 requests consent 128 and receives consent 132 from the customer device 112. The consent is a signed message that includes an encryption key or other information necessary to identify the customer profile and access the information. The third party provider 118 then uses the signed credentials to request the customer information 134 directly from the blockchain 110 without communicating with the original provider 116. The original provider 116 submits an encryption key 136 that can be used to decrypt the customer profile. The blockchain member or other processing entity grants the access 138 based on information stored in a smart contract stored in the blockchain. The information can then be received, decrypted and used by the third party provider 118. Following figure illustrates the configuration 150 for revoking the previously consented access right to the user profile.
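Before turning to the revocation flow in the next figure, a minimal sketch of the key-sharing pattern just described may help: the profile is encrypted with a symmetric key, and that key is itself encrypted ("wrapped") so only authorized providers can recover it. The sketch uses off-the-shelf primitives (Fernet and RSA-OAEP from the Python cryptography library); the provider names, key sizes, and sample profile are illustrative, and the on-chain storage and consent checks are omitted:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Provider A encrypts the customer profile with a freshly generated symmetric key.
profile_key = Fernet.generate_key()
encrypted_profile = Fernet(profile_key).encrypt(b'{"name": "Jane Doe", "kyc": "verified"}')
# encrypted_profile would be stored in the blockchain's profile space.

# Each authorized provider has a key pair registered via the trusted PKI.
provider_b_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
provider_b_public = provider_b_private.public_key()

# Anonymous key sharing: the profile decryption key is itself encrypted ("wrapped")
# so that only authorized providers can recover it; the wrapped key is exchanged
# through the blockchain rather than peer to peer.
wrapped_key_for_b = provider_b_public.encrypt(profile_key, OAEP)

# Provider B, holding signed consent, unwraps the key and decrypts the profile.
recovered_key = provider_b_private.decrypt(wrapped_key_for_b, OAEP)
profile = Fernet(recovered_key).decrypt(encrypted_profile)

# Revocation (next figure): provider A rotates the symmetric key and re-encrypts the
# profile, so previously wrapped keys no longer decrypt any future updates.
new_profile_key = Fernet.generate_key()
encrypted_profile = Fernet(new_profile_key).encrypt(b'{"name": "Jane Doe", "kyc": "updated"}')
```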
  53. 53. 53    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    The customer device 112 initiates the revocation operation 152 by notifying the blockchain of a decision to revoke a previously granted access right. The provider 116, periodically or via a trigger from an access revocation operation, generates a new key 154 and encrypts the customer profile with the new key 156. In operation, the previously privileged third party provider 118 now attempts to read the new key and customer profile 158 and decrypts the profile unsuccessfully 162. Signed consent and ACL-based access and revocation can be implemented on a blockchain using a signed message representing the customer's consent to authorize a provider for read/write access to the customer profile. Access can be granted without revealing the real identity of a provider to other members; otherwise, the anonymity constraint would be violated. Upon revocation, the signed message on the blockchain is invalidated, and a different decryption key is generated to satisfy another constraint. The authorization decision is decentralized and implemented in a smart contract, and thus ensures consensus among members. Consent signed by the customer must not explicitly identify provider B, to satisfy the anonymity constraint. The customer must ensure that the consent request was originated by provider B and not replayed by an unauthorized party. The consent must be explicitly bound to the profile request so that it cannot
  54. 54. 54    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    be misused. The profile must be readable only by a provider possessing valid consent from the customer. An auditor must be able to unambiguously map the identity in the consent to a real identity. Consent revocation signed by a customer must not explicitly identify the third party provider. Future updates to the customer profile should not be readable by the revoked service provider. Anonymous Transaction with Increasing Traceability/US20200134586 Some cryptocurrency transaction systems, such as Bitcoin, have higher anonymity and lower traceability. A user can open an account without disclosing any real world identification. Each account is identified by a virtual wallet address. Account owners can then receive and remit cryptocurrencies from and to others. Transactions recorded in blockchains include senders' virtual wallet addresses, recipients' virtual wallet addresses, and remittance amounts. Such systems have high anonymity and low traceability, since there is no connection between account owners' real world identification and their virtual world identification. It is almost impossible to trace the real senders and recipients of transactions. As a result, such a system can become a money laundering facilitator. To prevent the cryptocurrency transaction system from becoming a money laundering facilitator, the system has to provide a mechanism to trace specific transactions when necessary while increasing anonymity. To increase anonymity, several measures can be used: the transaction sender is not allowed to access the recipient's virtual identification, so that it cannot easily trace a specific remittance transaction; the transaction recipient is not allowed to access the sender's virtual identification. However, to construct a remittance transaction, both the sender's virtual identification and the recipient's virtual identification have to be used. Thus, the sender and recipient have to work cooperatively but at the same time avoid unnecessary information sharing to protect personal privacy. Since the sender cannot access the recipient's virtual identification, the recipient has to provide a token to the sender as a temporary replacement of the recipient's virtual identification. Through this approach, a hacker cannot trace specific remittance transactions even if it can access the distributed ledger of the distributed transaction consensus network. Likewise, no party can trace specific remittance transactions because it cannot match both the sender's and the recipient's physical identification and virtual identification. To further increase anonymity, a signature on the encrypted message can be created and sent together with it to prove the authenticity of the message.
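A simplified, hypothetical illustration of the recipient-token idea (not the patent's full protocol): the recipient's account manager issues a one-time token that stands in for the recipient's virtual identification, and only that manager can resolve it back to the real address. All identifiers below are made-up placeholders:

```python
import secrets

class AccountManager:
    """Toy recipient-side manager that maps one-time tokens to real virtual identifications."""
    def __init__(self):
        self._token_to_address = {}

    def issue_token(self, recipient_address: str) -> str:
        """Recipient side: mint a single-use token standing in for the wallet address."""
        token = secrets.token_hex(16)
        self._token_to_address[token] = recipient_address
        return token

    def resolve(self, token: str) -> str:
        """Manager side: redeem the token exactly once to construct the delivery transaction."""
        return self._token_to_address.pop(token)

manager = AccountManager()
token = manager.issue_token("recipient-virtual-wallet-address")

# The sender constructs the sending transaction using only the token, never the
# recipient's virtual identification...
sending_tx = {"from": "sender-virtual-wallet-address", "to_token": token, "amount": 10}

# ...and the recipient's manager later resolves the token to deliver the remittance.
delivery_address = manager.resolve(sending_tx["to_token"])
```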
  55. 55. 55    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    To prevent the cryptocurrency transaction system from becoming a money laundering facilitator, the system has to provide a mechanism to trace specific transactions when necessary. A remittance transaction comprises a plurality of sub-transactions (“transaction set”), including a sending transaction, an inter-manager transaction, and a delivery transaction. Either the sender or the recipient can create a label encrypted by using a tracing key for each sub-transaction. The label includes information to identify each sub-transaction in a transaction set and their sequences. A tracing key can be a symmetric key or an asymmetric key. When the tracing key is a symmetric key, each cryptocurrency transaction system creates and holds its own symmetric tracing key for encrypting labels. The symmetric tracing key is also shared with the network administrator or an authorized/assigned node. The authorized or assigned node that performs the tracing function is considered as the network administrator. When the tracing key is an asymmetric key, the network administrator or an authorized/assigned node creates a tracing key pair comprising a tracing public key and a tracing private key. The tracing public key is used by the cryptocurrency transaction system to encrypt labels for each sub-transaction in a remittance transaction set. The tracing private key is kept confidential by the network administrator. When necessary, the network administrator or an authorized/assigned node can decrypt the labels and reconstruct the whole remittance transaction. Zero-Knowledge Proof for Digital Asset Transaction/US20200034834 Zero-knowledge proof is a cryptographic technique that can be designed to prevent an observer of the distributed ledger from determining any information about the transaction that is taking place. Yet, the transaction agent can use the proofs to verify access rights, correctness of the transaction and compliance of the operation according to the defined business rules and system state. Such proofs are also used to update the system state without disclosing on the distributed ledger information about the participants to the transactions, the identifier that is the subject of the transaction, and logical links across participants, identifiers and transactions. Thus, the system eliminates the possibility of mining the data from the ledger to extract undesired data correlations, and it protects against leakage of business intelligence on the distributed ledger. A zero-knowledge proof algorithm can be combined with the digital asset publishing mechanism of a blockchain in a digital asset transfer transaction so that a node device in the
  56. 56. 56    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    blockchain can verify whether the asset type of an input digital asset is the same as the asset type of the output digital asset without providing any sensitive information to the verifier. In an unspent transaction output (UTXO) model, a digital asset transfer transaction is published on the blockchain to trigger destroying a digital asset held by an asset transferor and recreating a new digital asset for an asset transferee, so as to complete a transfer of the digital asset object between different holders. When the asset transferor needs to transfer the held digital asset in the blockchain, the asset transferor can input the output digital asset as input data to a commitment function for calculation to obtain a commitment value, and can generate, based on the zero-knowledge proof algorithm included in the blockchain, a proof used to perform a zero-knowledge proof on the commitment value. Then, the asset transferor can construct the asset transfer transaction based on the previous commitment value and the proof, and send the asset transfer transaction in the blockchain, to complete an asset object transfer. When receiving the asset transfer transaction, the node device in the blockchain can obtain the previous commitment value and the proof that are included in the asset transfer transaction, and then perform the zero-knowledge proof on the commitment value based on the previous proof by using the previous zero-knowledge proof algorithm, to verify whether the asset type of the input digital asset is the same as the asset type of the output digital asset in the asset transfer transaction. If it is determined, through verification, that the asset type of the input digital asset is the same as the asset type of the output digital asset in the asset transfer transaction, the commitment value corresponding to the output digital asset is published on the blockchain for deposit, to complete transfer of the digital asset between different holders. Because the asset transfer transaction includes only the commitment value that is generated through calculation by inputting the asset type of the output digital asset in the asset transfer transaction as input data to the commitment function, and the asset type of the output digital asset is not included in the asset transfer transaction in the form of plain text, the blockchain can hide the asset type of the digital asset transferred out by the asset transferor, and privacy of the asset transferor can be protected.
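As a simplified illustration of the commitment step, the sketch below uses a plain salted-hash commitment over the asset type; the patent instead pairs the commitment with a zero-knowledge proof so that the opening never has to be revealed, which is out of scope here. Function and asset names are hypothetical:

```python
import hashlib
import secrets

def commit(asset_type: str) -> tuple[bytes, bytes]:
    """Return (commitment, blinding factor) for an asset type, hiding it on-chain."""
    blinding = secrets.token_bytes(32)
    commitment = hashlib.sha256(blinding + asset_type.encode()).digest()
    return commitment, blinding

def verify(commitment: bytes, asset_type: str, blinding: bytes) -> bool:
    """Check that a revealed asset type and blinding factor match the commitment."""
    return hashlib.sha256(blinding + asset_type.encode()).digest() == commitment

# The asset transferor commits to the asset type; only the commitment values are
# included in the asset transfer transaction published on the blockchain.
input_commitment, input_blinding = commit("asset-type-gold-token")
output_commitment, output_blinding = commit("asset-type-gold-token")

# A verifying node that learns the openings (or, in the patent, checks a
# zero-knowledge proof instead) can confirm the input and output asset types match
# without the asset type ever appearing on-chain in plain text.
assert verify(input_commitment, "asset-type-gold-token", input_blinding)
assert verify(output_commitment, "asset-type-gold-token", output_blinding)
```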
  57. 57. 57    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    F. Blockchain Interoperability Technology Innovation Status To find blockchain interoperability technology innovation status, 28 published patent applications in the USPTO that are related to the key blockchain interoperability technology innovation are selected for detail analysis. Following figure shows blockchain interoperability patent application landscape with respect to the innovation entity. As shown in the figure, the key blockchain interoperability innovation entities are Alibaba Group, Accenture, IBM, JPMorgan Chase Bank, Blockstream Co., Intel, Salesforce, Paypal, and Infineon Technology. Following figure shows blockchain interoperability patent application landscape with respect to the key technology innovation field. As shown in the figure, Cross-Chain Transaction is the most innovated blockchain interoperability technology followed by Cross-Chain Blockchain Domain Name, Interoperability Node, Cross-Chain Data Operation/Access, Oracle (Blockchain-
  58. 58. 58    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Nonblockchain Interoperability), Interoperability Smart Contract, Smart Contract Reusability, Cross-Chain Cryptocurrency Swap, Distributed Ledger Gateway, and Interoperable Relay-Chain. Patent information can provide many valuable insights that can be exploited for developing and implementing new technologies. Patents can also be exploited to identify new product/service development opportunities. Interoperability Smart Contract / US20200099533 Blockchain interoperability enables multiple distributed ledger networks (DLNs) to provide data sharing, transferring, and synchronization between the DLNs. Following figure illustrates an example of the interoperable blockchain system 100. The system includes blockchain participants 102 that participate in a DLN 104. The blockchain participants 102 include full or partial nodes of the DLN 104. The DLN includes the participants of the DLN that access the
  59. 59. 59    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    blockchain 106. The blockchain 106 includes datablocks 107 that are linked cryptographically. The blockchain participants 102 include a data furnisher 108 that furnishes particular information stored in the blockchain 106 to receivers external to the DLN 104. The data receiver 110 can be a non-participant of the DLN 104 or a participant of a separate DLN. Unlike the data furnisher 108, the data receiver 110 does not have access to the blockchain 106 for the DLN 104. The data receiver 110 receives the token data stored in the blockchain 106 from the blockchain participants 102, such as the data furnisher 108. The blockchain participants 102 further include a membership service provider 112. The membership service provider 112 provides access to the identities and cryptological information associated with the blockchain participants 102 of the DLN 104. The membership service provider 112 includes a membership service repository 114. The membership service repository 114 includes a database that stores the identities and cryptological information associated with participants and non-participants of the DLN 104. The identity information includes IP addresses, MAC addresses, host names, user names, and/or any other information that identifies a participant or non-participant of the DLN 104. The cryptological information includes any information that is used to ensure the authenticity of a digital signature, such as a public key that corresponds to the private key that is applied to generate a digital signature. The data furnisher 108 and the data receiver 110 communicate with the membership service provider 112 to receive the public key of the data furnisher 108. The data receiver 110 submits a message or query to the membership service provider 112. After receiving the public key, the data receiver 110 verifies the truth of token data shared by or exported from the DLN 104. For example, the data receiver 110 receives authorization information from the data furnisher 108. The authorization information includes a digital signature corresponding to the token data. The digital signature includes a certification that the data furnisher 108 and data receiver 110 consent to a particular action, such as exporting token data. The signer of the digital signature can be confirmed based on the public key that is paired with the private key used to sign the signature. Sharing/exporting information with the data receiver 110 presents technical challenges: the ability for the data receiver 110 to verify that the token data is valid and authorized for sharing/export; preventing double spend between the blockchain participants 102 of the DLN 104 and non-participants; and ensuring synchronization of the token data between participants of the DLN 104 and non-participants of the DLN 104.
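A small sketch of the verification step that the membership service provider enables: the data furnisher signs the exported token data, and the data receiver checks the signature against the furnisher's public key obtained from the membership service repository. The key parameters, field names, and the PSS padding choice are illustrative assumptions, not taken from the patent:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

PSS = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# The data furnisher's key pair; the public key is registered in the membership
# service repository alongside the furnisher's identity information.
furnisher_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
furnisher_public_pem = furnisher_private.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)

# The furnisher signs the token data being shared/exported (the "authorization information").
token_data = b'{"token_id": "T-001", "amount": 100, "export_to": "receiver-DLN"}'
signature = furnisher_private.sign(token_data, PSS, hashes.SHA256())

# The data receiver queries the membership service provider for the furnisher's
# public key, then verifies that the shared token data is authentic and authorized.
receiver_copy_of_public_key = serialization.load_pem_public_key(furnisher_public_pem)
try:
    receiver_copy_of_public_key.verify(signature, token_data, PSS, hashes.SHA256())
    print("token data verified: export authorized by the data furnisher")
except InvalidSignature:
    print("verification failed: token data not authorized")
```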
  60. 60. 60    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    In the following figure, the data furnisher 108 has access to the furnisher blockchain 210 and the data receiver 110 has access to the receiver blockchain 212. The data furnisher 108 exports token data to the data receiver 110. The exportation of token data involves locking the token data on the furnisher blockchain 210 and committing the token data to the receiver blockchain 212. Before the token data is locked on the furnisher blockchain 210 (adding the data block to the blockchain) and committed to the receiver blockchain 212, various preconditions, authorizations, and data manipulation can occur. The data furnisher 108 includes a furnisher synchronization controller (FSC) 302. The FSC 302 coordinates transfer and exportation of token data to a remote blockchain. For example, the FSC 302 communicates token data stored on the furnisher blockchain 210 to the data receiver 110 for storage on the receiver blockchain 212. The FSC 302
  61. 61. 61    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    can determine when a successful transfer is completed and lock the token data on the furnisher blockchain 210. The FSC 302 accesses an interoperability smart contract 304 to determine the criteria, conditions, and parameters that dictate exportation of token data between DLNs. The interoperability smart contract 304 includes an authorization to transfer data stored on the furnisher blockchain 210 according to a protocol for asynchronous communication between the furnisher DLN 202, the receiver DLN 204, and/or other DLNs. The interoperability smart contract 304 includes terms, conditions, logic, and other information that the data furnisher 108 and the data receiver 110 agree to. The interoperability smart contract 304 also includes identifiers corresponding to the token data in the furnisher blockchain 210 (data furnisher and data receiver 110 that consent to the export). Additionally, the interoperability smart contract 304 includes a cryptologic committal 306. The cryptologic committal 306 includes commit logic configured to cause the data receiver 110 to commit the data to the receiver blockchain 212. In general, token data can be considered committed when the token data is appended to the receiver blockchain 212. The commit record identifies the participants, DLNs, token data, and any other information that records the transfer event.
  62. 62. 62    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    The interoperability smart contract 304 further includes a transfer logic 308. The transfer logic 308 includes logic configured to cause the data receiver 110 to receive, generate, and append the token data to the receiver blockchain 212. For example, the transfer logic 308 can include instructions to require or validate information received by the data receiver 110. The transfer logic 308 determines whether, according to predetermined rules, valid token data is received by the data receiver. The transfer logic 308 also causes the data receiver 110 to re-create the token data in a manner that is compliant with the receiver DLN. The data furnisher 108 recreates the token data based on the transfer logic prior to sending the data to the data receiver. The transfer logic 308 appends the token data to the receiver blockchain 212 for compliance with the receiver DLT. The FSC 302 receives a pre-commit acknowledgement 310. The pre-commit acknowledgement 310 includes a verification that the token data was successfully received and generated by the data receiver 110. The pre-commit acknowledgement 310 indicates that the token data was successfully appended to the receiver blockchain 212. The pre-commit acknowledgement 310 can include digital signatures signed by the data receiver 110. For example, the data receiver 110 can be identified in the interoperability smart contract 304. The digital signatures can verify that the data is properly re-generated and added to the receiver blockchain 212 in compliance with the receiver DLN and the criteria of the interoperability smart contract 304. To synchronize the locking and committal of the token data transferred between DLNs, the FSC 302 encrypts the interoperability smart contract 304 such that the data receiver 110 initially receives the interoperability smart contract 304 without the ability to perform the commit according to the committal logic. Additionally, the FSC 302 encrypts information that the interoperability smart contract 304 accesses to perform the committal. For example, the FSC 302 encrypts the cryptological committal, other portions of the interoperability smart contract 304, or information provided to the interoperability smart contract 304 based on a hash function and a committal key 312. In response to receipt of the pre-commit acknowledgement 310, the FSC 302 communicates the committal key to the data receiver 110. The data receiver 110 decrypts the interoperability smart contract 304, or other authorization provided to the interoperability smart contract 304, and performs the committal according to the committal logic.
  63. 63. 63    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Following figure illustrates the flow diagram for example logic of the system 100. The FSC 302 obtains the interoperability smart contract 304 (402). The interoperability smart contract 304 includes the cryptologic committal 306. The interoperability smart contract 304 includes commit logic configured to cause the data receiver 110 to commit the token data to the receiver blockchain 212. The commit logic and the interoperability smart contract 304 can be encrypted based on a predetermined committal key 312. The FSC 302 appends the interoperability smart contract 304 to the furnisher blockchain 210 (404). For example, the FSC 302 can add a datablock to the furnisher blockchain 210 that includes the interoperability smart contract 304. The datablock further includes a hash of a previous datablock stored on the blockchain. The FSC 302 sends the interoperability smart contract 304 to the data receiver (406). For example, the FSC 302 can send the interoperability smart contract 304 to the data receiver 110 and another participant of the receiver DLN 204. The FSC 302 sends the token data with the interoperability smart contract 304. Alternatively, the transfer logic 308 of the interoperability smart contract 304 includes instructions configured to regenerate the token data. The FSC 302 receives the pre-commit acknowledgment of the interoperability smart contract 304 (408). In response to the pre-commit acknowledgement, the FSC 302 locks the data on the furnisher blockchain 210 (410). The FSC 302 sends the committal key 312 to the data receiver 110, or some other participant of the receiver DLN (412). The data receiver 110 can decrypt the cryptological committal and perform the committal in response to receipt of the committal key 312.
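A condensed, hypothetical walk-through of steps 402-412, with the cryptography reduced to a single symmetric committal key so that the ordering of pre-commit acknowledgement, lock, and committal is visible; the class and method names are illustrative, not from the patent:

```python
from cryptography.fernet import Fernet

class FurnisherSyncController:
    """Toy model of the FSC flow: build/append the contract, send it, await the
    pre-commit acknowledgement, lock the token data, then release the committal key."""
    def __init__(self, token_data: bytes):
        self.token_data = token_data
        self.committal_key = Fernet.generate_key()
        self.locked = False

    def build_contract(self) -> dict:
        # (402)/(404): the commit logic is encrypted with the committal key, so the
        # receiver cannot commit until the key is released; appending the contract
        # to the furnisher blockchain is omitted here.
        sealed_commit_logic = Fernet(self.committal_key).encrypt(b"commit token T-001 to receiver chain")
        return {"token_data": self.token_data, "sealed_commit_logic": sealed_commit_logic}

    def on_pre_commit_ack(self) -> bytes:
        # (408)/(410)/(412): on the pre-commit acknowledgement, lock the token data
        # on the furnisher blockchain and hand over the committal key.
        self.locked = True
        return self.committal_key

class DataReceiver:
    def receive_contract(self, contract: dict) -> str:
        # (406 -> ack): token data received/regenerated provisionally; signal readiness.
        self.contract = contract
        return "pre-commit-ack"

    def commit(self, committal_key: bytes) -> bytes:
        # Decrypt the sealed commit logic and perform the committal on the receiver chain.
        return Fernet(committal_key).decrypt(self.contract["sealed_commit_logic"])

fsc = FurnisherSyncController(b'{"token_id": "T-001"}')
receiver = DataReceiver()
ack = receiver.receive_contract(fsc.build_contract())
key = fsc.on_pre_commit_ack() if ack == "pre-commit-ack" else None
print(receiver.commit(key), "| furnisher locked:", fsc.locked)
```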
  64. 64. 64    ©2020 TechIPm, LLC All Rights Reserved http://www.techipm.com/    Transferring Digital Asset Using Sidechain / US20160330034 (Blockstream Corp) A sidechain is a separate blockchain that is attached to its parent blockchain using a two-way peg, which enables interchangeability of digital assets between the parent blockchain and the sidechain. Plasma and Polkadot for Ethereum and RSK for Bitcoin are some good examples of the sidechain. Transfers using the pegged sidechains are atomic; the transfer either happens entirely, or not at all. Another benefit of using pegged sidechains is avoidance of failure modes that result in loss or permit fraudulent creation of assets. Assets are transferred to the pegged sidechains by providing proofs of possession in the transferring transactions themselves, avoiding the need for nodes to track the sending chain. On a high level, when moving assets from one blockchain to another, a transaction is created on the first blockchain locking the assets. A transaction is also created on the second blockchain whose inputs include a cryptographic proof that the lock transaction on the first blockchain was done correctly. These inputs are tagged with an asset type, e.g. the genesis hash of the asset's originating blockchain.
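A highly simplified sketch of the two-way peg flow (not Blockstream's actual protocol): a lock transaction is recorded on the parent chain, and the sidechain transaction's input carries a reference standing in for the cryptographic proof of that lock, tagged with the asset's originating genesis hash; all names and the reduced "proof" are illustrative:

```python
import hashlib
from dataclasses import dataclass

def h(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

@dataclass
class LockTx:            # recorded on the parent (sending) chain
    asset_genesis: str   # genesis hash tagging the asset's originating blockchain
    owner: str
    amount: int
    def txid(self) -> str:
        return h(f"lock|{self.asset_genesis}|{self.owner}|{self.amount}")

@dataclass
class MintTx:            # recorded on the pegged sidechain
    lock_proof: str      # stands in for the proof that the lock was done correctly
    asset_genesis: str
    owner: str
    amount: int

PARENT_GENESIS = h("parent-chain-genesis")

# Step 1: lock the asset on the parent chain.
lock = LockTx(asset_genesis=PARENT_GENESIS, owner="alice", amount=5)

# Step 2: create the sidechain transaction whose input includes a proof of the lock.
mint = MintTx(lock_proof=lock.txid(), asset_genesis=PARENT_GENESIS, owner="alice", amount=5)

# Sidechain validation sketch: the proof must reference a valid lock of the same
# asset type and amount; either the whole transfer is accepted or nothing is (atomicity).
assert mint.lock_proof == lock.txid()
assert mint.asset_genesis == lock.asset_genesis and mint.amount == lock.amount
```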
