I. INTRODUCTION

Artificial intelligence (AI) and machine learning (ML) are interdisciplinary fields that have found support and applications across all domains of science, technology, and engineering. They have transformed the traditional model-based design and development process into a data-driven learning process. On one side, AI is driven by the needs of these domains; on the other, AI is applied in these domains to augment the performance and capabilities of many applications.

II. AI IN COMPUTER VISION AND NATURAL LANGUAGE PROCESSING DOMAINS

For computer vision-based applications, AI enables computers and systems to derive meaningful information from digital images, videos, and other visual inputs, and to take actions or make decisions based on low-level visual stimuli. Backed by AI and ML, computers can mimic the human abilities to see, observe, and understand.

Natural language processing (NLP) is also a branch of AI, one that focuses on helping computers understand the way humans write and speak. Real-world applications of NLP include, but are not limited to:

  • Voice-controlled assistants like Siri and Alexa.
  • Question answering by customer service chatbots.
  • Streamlining the recruiting process by scanning through the skills and experience listed in resumes.
  • Tools for correcting errors and making suggestions to simplify writing.
  • Language models that predict the next words in a text, based on what has already been typed.

In the following, one computer vision topic and two ongoing NLP-based research applications are introduced.

A. EDGE DETECTION

Among the many topics in computer vision and image processing research, edge detection plays an important role in a wide range of fields, from satellite imaging and medical screening to object recognition. It is an image processing technique that finds the boundaries/edges in a digital image at intensity discontinuities, yielding a global view of the image that retains its most critical outlines. Robust edge-based shape features enable more concrete analysis in computer vision applications.
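
As a minimal classical illustration of the technique (a sketch assuming NumPy and SciPy, not code from any cited work), the following marks the pixels of a toy image where the Sobel gradient magnitude exceeds a fixed threshold:

```python
import numpy as np
from scipy.ndimage import sobel

img = np.zeros((8, 8))
img[:, 4:] = 1.0                 # toy image: a vertical step edge
gx = sobel(img, axis=1)          # horizontal intensity gradient
gy = sobel(img, axis=0)          # vertical intensity gradient
edges = np.hypot(gx, gy) > 1.0   # threshold the gradient magnitude
print(edges.astype(int))         # 1s mark the detected edge pixels
```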

Traditional edge detection methods exploit low-level visual cues to construct hand-crafted features and then classify edge and nonedge pixels using threshold-based methods. The results of these conventional approaches lack semantics at the object level. Nowadays, convolutional neural network (CNN)-based approaches have become mainstream in the image processing domain. Among deep network-based edge detection methods, holistically nested edge detection (HED) [1] is one of the most successful frameworks. It produces five intermediate side outputs and performs deep supervision along the network pathway. Its final fused result achieves performance within a 2% gap of human vision. Since then, several approaches have used a similar architecture to further improve accuracy. These efforts mainly focus on improving the quality of the intermediate outputs or enhancing the deep supervision strategies. However, these approaches fuse intermediate layers without considering the hierarchical edge importance within each side output. This poses a dilemma for the network: to include desired features, it has to accept much unwanted data, and vice versa. Consequently, the result often contains more noise and thick edges while missing some key boundaries.
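
To make the side-output idea concrete, here is a minimal sketch in PyTorch (a toy stand-in, not the authors' code; the three-stage backbone replaces the five-stage VGG network used in [1]) of a network that emits one upsampled edge map per stage and fuses them with a learned 1x1 convolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHED(nn.Module):
    def __init__(self):
        super().__init__()
        # Three downsampling stages stand in for the five VGG stages of [1].
        self.stages = nn.ModuleList([
            nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                          nn.ReLU(), nn.MaxPool2d(2))
            for cin, cout in [(3, 16), (16, 32), (32, 64)]
        ])
        # A 1x1 conv per stage turns its features into one side edge map.
        self.heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (16, 32, 64)])
        # The fused output is a learned weighting of all side outputs.
        self.fuse = nn.Conv2d(3, 1, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        sides = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            # Upsample each side output to the input size so every one
            # can receive its own (deep) supervision against ground truth.
            sides.append(F.interpolate(head(x), size=(h, w),
                                       mode="bilinear", align_corners=False))
        fused = self.fuse(torch.cat(sides, dim=1))
        return sides, fused  # a loss is applied to each side map and the fusion

sides, fused = TinyHED()(torch.randn(1, 3, 64, 64))
print(fused.shape)  # torch.Size([1, 1, 64, 64])
```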

To tackle this issue, the Scale-Invariant Salient Edge Detection (SISED) framework [2] can locate and extract the important scale-invariant salient edges (SISEs) as a subset of each side output without increasing the network complexity. The normalized Hadamard product is the key operation of SISED: a multiplicative operation is applied to promote mutually agreed features across the multiscale side outputs while suppressing those with weak scale expression. SISED computes edge importance hierarchically to enhance the edge results and reaches state-of-the-art performance.
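
A minimal sketch of the multiplicative idea (our own simplification, not the authors' implementation; the sigmoid and per-map min-max normalization are assumptions) is as follows:

```python
import torch

def sise_fusion(side_outputs, eps=1e-6):
    """Promote edges agreed upon across scales; suppress the rest."""
    normed = []
    for s in side_outputs:
        s = torch.sigmoid(s)                           # map logits to (0, 1)
        s = (s - s.min()) / (s.max() - s.min() + eps)  # normalize per map
        normed.append(s)
    fused = normed[0]
    for s in normed[1:]:
        fused = fused * s            # Hadamard (element-wise) product
    # A geometric mean keeps the fused map on a comparable dynamic range.
    return fused.pow(1.0 / len(normed))

maps = [torch.randn(1, 1, 64, 64) for _ in range(5)]  # five side outputs
print(sise_fusion(maps).shape)
```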

Moving forward, research on finding robust edges is still ongoing. Recently, vision transformer (ViT)-based models [3] have demonstrated better performance and greater efficiency than CNNs on image classification. With their powerful tokenized queries, self-attention mechanism, and encoding-decoding strategy, it is possible that the ViT framework can be applied to edge detection. For example, the visual saliency transformer (VST) [4] is able to extract object contours and points to a new paradigm for transformer-based edge detection models.

B. BERT-BASED NEGOTIATION CHATBOT

Business negotiations are often hard due to conflicts between the involved parties. Some negotiations are not only time consuming but also negative, resulting in damage to business relationships when unexpected negative emotions grow [5]. A solution to these problems is to automate negotiations with a negotiation chatbot built on the BERT framework, which was developed for NLP.

BERT (Bidirectional Encoder Representations from Transformers) is a deep learning natural language representation model [6] with powerful bidirectional prediction and contextual understanding capabilities. The model involves two main steps: pretraining and fine-tuning. BERT is pretrained on an enormous amount of unlabeled data and achieves high performance when it is fine-tuned to a specific task through additional training. Pretraining is performed on two tasks: the masked language model (MLM), which predicts missing word(s) from their surrounding context within a sentence, and next sentence prediction (NSP), which is effective for question/answer tasks.
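
Both pretraining tasks can be probed directly with off-the-shelf tooling. The sketch below assumes the Hugging Face transformers package; the model name and example sentences are illustrative only:

```python
import torch
from transformers import pipeline, BertTokenizer, BertForNextSentencePrediction

# MLM: predict a masked word from its left and right context.
fill = pipeline("fill-mask", model="bert-base-uncased")
for cand in fill("We are happy to [MASK] your offer.")[:3]:
    print(cand["token_str"], round(cand["score"], 3))

# NSP: score whether sentence B plausibly follows sentence A.
tok = BertTokenizer.from_pretrained("bert-base-uncased")
nsp = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
enc = tok("Can you lower the price?", "We could offer a small discount.",
          return_tensors="pt")
prob_next = torch.softmax(nsp(**enc).logits, dim=1)[0, 0]  # label 0 = "is next"
print(f"P(B follows A) = {prob_next:.2f}")
```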

The first step in building this chatbot is fine-tuning the model to the negotiation task, using more than a thousand bilateral negotiations experimentally conducted around the globe [7]. Next, the MLM is extended to predict whether a negotiation will end with a positive or negative result. Finally, the NSP is used to generate automated responses to “chat” with a human counterpart in a negotiation setting. This BERT-based negotiation chatbot could be utilized as a business representative that can sense and foresee the positive and negative atmosphere during the negotiation and strategically react to the opponent. The goal is to help the two parties reach an agreement that both are satisfied with.
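
A minimal sketch of such a fine-tuning step (assuming PyTorch and Hugging Face transformers; the toy texts, labels, and hyperparameters are placeholders, not the negotiation data of [7]):

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)   # 0 = negative, 1 = positive outcome

texts = ["We accept your terms.", "This offer is unacceptable."]  # toy stand-ins
labels = torch.tensor([1, 0])
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss   # one illustrative training step
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.3f}")
```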

C. PHARMBERT: A PRETRAINED LANGUAGE MODEL FOR PHARMACEUTICAL ERROR PREDICTION

The total number of retail prescriptions filled annually in the USA reached 4.69 billion in 2021 [8]. However, tracking of the service quality of the dispensation process is still very limited [9]. In an effort to address the factors that lead to quality-related events, some healthcare organizations and governments have adopted error-reporting systems. Such reporting systems collect pharmaceutical errors that either reach patients (incident events), such as an incorrect drug, dose, or quantity, or are intercepted at the pharmacy (near-miss events) [10].

To discover common contributing factors that may have led to quality-related events, large-scale analysis of these events is crucial [11]. Many common factors that result in incidents in retail pharmacies are not obvious to the human eye or to traditional data-mining solutions. Progress in deep learning for NLP has boosted the development of effective mining techniques, including those for extracting valuable latent information from pharmaceutical documents.

In this research, BERT is utilized to make predictions on pharmaceutical transaction data (collected by a Canadian error-reporting system). To fit the pharmaceutical data to the BERT model, the event information is formatted into natural-language tokens, which are then used to fine-tune the pretrained BERT model. The trained PharmBERT model achieves an accuracy of ∼84% when predicting whether an event would result in a near miss (caught beforehand) or an incident (caught afterwards). Using this model, it is possible to further predict other aspects of an event, such as the stage (prescribing, transcribing, dispensing, administration, storage, or monitoring) at which the incident occurs, or the category of issues the event falls under. It is hoped that the findings from this study could lead to solutions that reduce pharmaceutical incidents and improve patient safety.
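
A hypothetical illustration of that formatting step (the field names and serialization scheme here are invented; the actual schema of the reporting system is not shown in this overview):

```python
def event_to_text(event: dict) -> str:
    """Serialize one reported event into a sentence-like string for BERT."""
    return " ".join(f"{field}: {value}." for field, value in event.items())

event = {"drug": "amoxicillin", "stage": "dispensing", "issue": "incorrect dose"}
print(event_to_text(event))
# -> drug: amoxicillin. stage: dispensing. issue: incorrect dose.
```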

Continuing with our strengths in supporting fundamentals in AI development and in industrial applications from the previous issue [12], we selected five papers in these domains. In the following sections, we present an overview of the papers in this issue, which support the education of AI and apply AI to extend and improve performance and capabilities in several application domains. These papers include both academic research papers and industrial application papers.

III. DATA COLLECTION IN CLINICAL TRIALS

The paper selected in this area introduced data collection in clinical trials and its applicability to the actual process of drug development for real-world patients in routine clinical practice. Randomized clinical trials (RCTs) have been considered the gold standard for regulatory approval in drug development. However, RCTs may not be feasible for some diseases and under certain circumstances, and in such cases their findings may not generalize to real-world patients in routine clinical practice. Real-world evidence (RWE), generated from various sources of real-world data, has become increasingly important for drug development and clinical decision making in the digital era. This paper described real-world data collection, RWE and its generation, followed by the characteristics of, and differences between, RCTs and RWE studies. In addition, the challenges and limitations of real-world data and RWE studies were discussed [13].

IV. FUNDAMENTALS SUPPORTING AI DEVELOPMENT

We selected two papers in this topic area that address fundamental issues in the development and improvement of artificial intelligence research and education.

The first paper in this section addresses a current challenge in creating practice questions for programming learning, namely that the instructor must manually organize heterogeneous learning resources. This paper proposed a semantic programming question generation model that aims to help instructors automatically generate new programming questions as well as their assessments. The model was designed to transform conceptual and procedural programming knowledge from textbooks into a semantic network built from a Local Knowledge Graph and Abstract Syntax Trees. For any given question, the model queries the established network to find related code examples and generates a set of questions from the semantic structures of the associated Local Knowledge Graph and Abstract Syntax Tree. An analysis was conducted to compare the generated questions with instructor-made textbook questions. The paper also studied the breadth and depth of the quality of the questions generated by the model and investigated the complexity of the questions in relation to student performance [14].
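
As a toy illustration of the Abstract Syntax Tree side of such a pipeline (a sketch under our own assumptions, not the paper's model), Python's built-in ast module can list the constructs a textbook code example exercises, which could then be linked as concept nodes in a knowledge graph:

```python
import ast

example = """
def total(xs):
    s = 0
    for x in xs:
        s += x
    return s
"""

# Every AST node type the snippet exercises is a candidate concept node.
concepts = {type(node).__name__ for node in ast.walk(ast.parse(example))}
print(sorted(concepts & {"FunctionDef", "For", "AugAssign", "Return"}))
# -> ['AugAssign', 'For', 'FunctionDef', 'Return']
```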

The second paper in this section deals with effectively teaching a cyber security class in a simulated environment. It described an application of the problem-based learning (PBL) methodology to enhance professional training-based cyber security education. The authors developed an online laboratory environment to apply PBL with knowledge graph-based guidance for hands-on labs in cyber security course teaching. Learners were given access to a virtual lab environment with knowledge graph guidance that simulates real-life cyber security scenarios. They were thus required to think independently and apply their knowledge to create cyber attacks and defense approaches to solve the problems posed in each lab assignment. The experimental study showed that learners tend to achieve better learning outcomes by leveraging PBL with knowledge graph guidance, become more aware of cyber security and relevant concepts, and express interest in continuing to learn cyber security using the system [15].
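
A minimal sketch of what knowledge graph-based lab guidance could look like (the lab names and prerequisite structure are invented for illustration): labs are nodes, prerequisite relations are edges, and the guidance recommends any lab whose prerequisites the learner has completed.

```python
# Labs are nodes; each entry lists the labs that must be completed first.
prereq = {
    "reconnaissance": [],
    "sql_injection": ["reconnaissance"],
    "privilege_escalation": ["sql_injection"],
    "defense_hardening": ["sql_injection"],
}

def next_labs(completed):
    """Recommend labs whose prerequisites have all been completed."""
    return [lab for lab, reqs in prereq.items()
            if lab not in completed and all(r in completed for r in reqs)]

print(next_labs({"reconnaissance"}))  # -> ['sql_injection']
```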

V. AI IN INDUSTRIAL APPLICATIONS

We selected two papers in this topic area that apply big data and machine-learning concepts and techniques to implement and improve applications in other research and industry domains.

The first paper in this topic area investigated the problem of key radar signal sorting and recognition in electronic intelligence. A combined approach based on clustering and the pulse repetition interval (PRI) transform algorithm was developed to solve this problem, which traditional methods based on pulse description words do not address efficiently. The proposed solution addresses the problem in three steps: first, presorting is carried out by a clustering algorithm; then, the PRI estimates of each cluster are obtained by the PRI transform algorithm; finally, the matching between the various PRI estimates and the key targets is assessed. Simulation results showed that the proposed method improved the time efficiency of key signal recognition and dealt with a complex signal environment with noise interference and overlapping signals [16].
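
The following sketch illustrates the first two steps on synthetic data (assuming NumPy and scikit-learn; a simple first-order difference estimator stands in for the full PRI transform, and the third step, matching estimates to key targets, is omitted):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Two emitters with PRIs of 1.0 ms and 1.7 ms, distinguished here only
# by carrier frequency (a stand-in for the full pulse description word).
toa_a, toa_b = np.arange(0.0, 50.0, 1.0), np.arange(0.3, 50.0, 1.7)
toa = np.concatenate([toa_a, toa_b])
freq = np.concatenate([np.full(len(toa_a), 9.10), np.full(len(toa_b), 9.40)])
freq += rng.normal(0.0, 0.01, len(freq))      # measurement noise (GHz)

# Step 1: presort the interleaved pulses by clustering on frequency.
labels = DBSCAN(eps=0.05, min_samples=5).fit_predict(freq.reshape(-1, 1))

# Step 2: estimate each cluster's PRI from first-order TOA differences.
for lab in sorted(set(labels) - {-1}):
    pri = np.median(np.diff(np.sort(toa[labels == lab])))
    print(f"cluster {lab}: PRI ≈ {pri:.2f} ms")
```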

The second paper in this topic area studied automated control systems and their calibration in industrial processes and in artificial intelligence and robotics systems. The main problem studied in this paper is precision measurement of contact high-temperature strain. The study established an automatic calibration device for high-temperature strain gauges that automatically controls the temperature of a high-temperature furnace. Based on this calibration device, high-temperature strain measurement accuracy correction software was developed to characterize the high-temperature strain gauge over multiple parameters. The curves of the sensitivity coefficient, thermal output, zero drift, and creep characteristics with temperature are obtained, and a strain measurement accuracy compensation model is implemented in the software. A high-temperature strain measurement experiment was carried out to verify that the corrected model meets the requirements in each temperature range [17].
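
As a rough illustration of what such a compensation model could look like (the calibration numbers and the quadratic fit are invented for this sketch, not taken from [17]), the software can fit the gauge's thermal output against temperature and subtract that apparent strain from each measurement:

```python
import numpy as np

# Calibration data: apparent strain (thermal output) vs. temperature.
temp_cal = np.array([20.0, 200.0, 400.0, 600.0, 800.0])      # deg C
thermal_out = np.array([0.0, 120.0, 420.0, 950.0, 1700.0])   # microstrain
coef = np.polyfit(temp_cal, thermal_out, deg=2)  # fitted drift curve

def compensate(measured_strain, temp):
    """Subtract the temperature-induced apparent strain."""
    return measured_strain - np.polyval(coef, temp)

print(f"{compensate(2650.0, 600.0):.0f} microstrain")  # compensated reading
```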