Naga Sai Krishna Adatrao, Gowtham Reddy Gadireddy and Jiho Noh, Department of Computer Science, Kennesaw State University, 1100 South Marietta Pkwy SE, Marietta, USA
Conversational Search (ConvSearch) is an approach to enhancing information retrieval in which users engage in a dialogue for information-seeking tasks. This survey focuses on the human-interactive characteristics of ConvSearch systems, highlighting the operations of the action modules, namely the retrieval, question-answering, and recommender systems. Along with the action modules, we describe ConvSearch as it pertains to more specific research problems in knowledge bases, natural language processing, and dialogue management systems. We further discuss the aims of ConvSearch in the biomedical and healthcare fields for the utilization of clinical social technology. Finally, we discuss the challenges and issues of ConvSearch, particularly in biomedicine. This survey aims to provide an integrated and unified vision of the ConvSearch components from different fields, which can benefit the information-seeking process in healthcare systems and in general.
Conversational Information Retrieval, Information Retrieval, Recommender System, Question-Answering, Interactive Information Systems, Dialogue Systems, Natural Language Processing.
Valerio Bellandi1 and Stefano Siccardi2, 1Department of Computer Science, Università degli Studi di Milano, Italy, 2Consorzio Interuniversitario Nazionale per l’Informatica, Italy
This paper proposes a conceptual structure for a repository of entities that can be found by the usual procedures of Natural Language Processing: the search for entities mentioned in text, their identification, possibly through links to entries in a Background Knowledge Base (BKG), and the construction of a Knowledge Base or Graph to host the information found in this process. We address applications where a BKG is of little help because the involved entities are not relevant enough to be included in any, being for instance ordinary people or small companies. Therefore, we rely on the entities’ attributes and relationships for unique identification, disambiguation, knowledge checking and any other relevant operation. One of the final goals achieved by the proposed method is the ability to merge knowledge collected in separate bases, once they refer to the same Entity Registry.
Named Entity Recognition, Named Entity Linking, Knowledge Bases, Knowledge Graphs.
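The attribute-based identification and registry merging described above can be illustrated with a minimal sketch. This is not the paper's implementation: the attribute names (`name`, `birth_year`, `relations`, `city`) and the merge policy are assumptions made for the example only.

```python
def entity_key(entity):
    """Identification key built from attributes and relationships,
    used when no background knowledge base entry exists for the
    entity (attribute names are illustrative assumptions)."""
    return (entity["name"].lower(), entity.get("birth_year"),
            frozenset(entity.get("relations", [])))

def merge_registries(reg_a, reg_b):
    """Merge two knowledge bases that refer to the same entity
    registry: entities sharing a key are unified, others kept as-is."""
    merged = {entity_key(e): dict(e) for e in reg_a}
    for e in reg_b:
        k = entity_key(e)
        if k in merged:
            merged[k].update(e)  # unify knowledge about the same entity
        else:
            merged[k] = dict(e)
    return list(merged.values())

a = [{"name": "Jane Roe", "birth_year": 1980, "relations": ["works_at:AcmeSrl"]}]
b = [{"name": "jane roe", "birth_year": 1980, "relations": ["works_at:AcmeSrl"],
      "city": "Milan"}]
merged = merge_registries(a, b)  # one unified entity, gaining the "city" attribute
```

The key is deliberately built from several attributes at once, reflecting the paper's point that ordinary people or small companies can only be disambiguated by the combination of their attributes and relationships.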
Li Kai1, Li Ning1, Zhang Wei1 and Gao Ming1, 1School of Computer Science and Beijing Advanced Innovation Center for Materials Genome Engineering, Beijing Information Science and Technology University, Beijing, China
Based on the traditional VAE, a novel neural network model is presented that uses the latest molecular representation, SELFIES, to improve the generation of new molecules. In this model, a multi-layer convolutional network and Fisher information are added to the original encoding layer to learn the data characteristics and guide the encoding process, which makes the features of the data hiding layer more aggregated, and a Long Short-Term Memory (LSTM) network is integrated into the decoding layer for better data generation, which effectively solves the degradation phenomenon produced by the encoding and decoding layers of the original VAE model. Through experiments on the ZINC molecular data set, it is found that the similarity of the new VAE is 8.47% higher than that of the original one. SELFIES is better at generating a variety of molecules than the traditional molecular representation, SMILES. Experiments have shown that using SELFIES and the new VAE model presented in this paper can improve the effectiveness of generating new molecules.
VAE, Molecular notation, Multilayer convolutional network, Fisher information, LSTM.
Haritha G B and Sahana N B, Department of Electronics and Communication Engineering, PES University, Bengaluru, Karnataka, India
The cryptocurrency ecosystem has been the centre of discussion on many social media platforms, following its noted volatility and varied opinions. Twitter is rapidly being utilised as a news source and a medium for Bitcoin discussion. Our algorithm seeks to use historical prices and the sentiment of tweets to forecast the price of Bitcoin. In this study, we develop an end-to-end model that forecasts the sentiment of a set of tweets (using a Bidirectional Encoder Representations from Transformers (BERT)-based neural network model) and forecasts the price of Bitcoin (using a Gated Recurrent Unit (GRU)) from the predicted sentiment and other metrics such as historical cryptocurrency price data, tweet volume, a user’s following, and whether or not a user is verified. The sentiment prediction gave a Mean Absolute Percentage Error of 9.45%, averaged over real-time and test data. The price prediction gave a Mean Absolute Percentage Error of 3.6%.
Price prediction, Bitcoin, BERT, GRU, Twitter.
Akshata Phadte, Department of Information Technology, P.E.S’s R.S.N College of Arts and Science, Farmagudi, Goa, India
The evolution of information technology has led to the collection of large amounts of code-mixed data, the volume of which has increased to the extent that in the last two years more data has been produced than all the data ever recorded in human history. This has necessitated the use of machines to understand, interpret and apply data without manual involvement. A lot of these texts are available in transliterated, code-mixed form, which due to their complexity are very difficult to analyse. Code-mixing is the mixing of two or more languages or language varieties in speech. Apart from the inherent linguistic complexity, the analysis of code-mixed content poses complex challenges owing to the presence of spelling variations and non-adherence to a formal grammar. However, any downstream Natural Language Processing task requires tools that can process and analyse code-mixed social media data. Currently there is a lack of publicly available resources for code-mixed English-Konkani-Marathi social media data, while the amount of such text is increasing every day. The lack of a standard dataset for evaluating these systems makes it difficult to make meaningful comparisons of their relative accuracies. In this paper, we describe the methodology for the creation of a normalisation dataset for English-Konkani-Marathi Code-Mixed Social Media Text (CMST). We believe that this dataset will prove useful not only for the evaluation and training of normalisation systems but will also help in the linguistic analysis of the normalisation of Indian languages from native scripts to Roman. Normalisation refers to the process of writing the text of one language using the script of another language, whereby the sound of the text is preserved as far as possible.
Code-Mixing, Social Media Text, Normalisation, Natural Language Processing.
Boping Ding1 and Xin Su2, 1Department of Electronic Information, Chongqing University of Posts and Telecommunications, Chongqing, China, 2Tsinghua University, Beijing, China
With the development of UAV technology, UAVs used as aerial base stations can quickly restore vehicle communication after disasters. In order to reduce delay and make the most rational use of bandwidth and power, this paper proposes a joint optimization and allocation strategy for bandwidth and power. First, a deep learning network is trained: the reward mechanism is driven by the change in delay, and the purpose of training is to enable the UAV to select the optimal bandwidth allocation coefficient as the environment changes dynamically. We then propose a joint optimization selection strategy: a signal-to-noise ratio threshold is set to ensure communication quality, each user’s transmission rate is calculated according to Shannon’s formula, and finally the delay-minimizing scheme is selected as the final bandwidth and power allocation. In simulation experiments, compared with previous traditional algorithms, the performance of the network is further improved.
UAV networking, wireless communication, resource allocation, deep learning.
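The selection step described above can be sketched with Shannon's capacity formula, C = B log2(1 + SNR). This is an illustrative outline only, not the paper's trained model: the candidate allocations, payload size, and threshold below are assumed values for the example.

```python
import math

def shannon_rate(bandwidth_hz, snr_linear):
    """Achievable rate (bit/s) from Shannon's capacity formula."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def pick_allocation(candidates, data_bits, snr_threshold):
    """Select the (bandwidth, SNR) candidate that minimizes
    transmission delay, discarding candidates whose SNR falls
    below the threshold that guarantees communication quality."""
    best = None
    for bw, snr in candidates:
        if snr < snr_threshold:
            continue  # communication quality not guaranteed
        delay = data_bits / shannon_rate(bw, snr)
        if best is None or delay < best[1]:
            best = ((bw, snr), delay)
    return best

# Example: 1 Mbit payload, three candidate allocations, linear SNR threshold of 4
choice, delay = pick_allocation([(1e6, 3.0), (2e6, 7.0), (5e6, 4.0)], 1e6, 4.0)
```

In the paper the bandwidth allocation coefficient is chosen by the trained network rather than by exhaustive enumeration; the enumeration here only illustrates the rate and delay arithmetic.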
Mikhail E. Belkin, Leonid Zhukov, and Alexander S. Sigov, MIREA - Russian Technological University, Moscow, Russian Federation
In this paper, an advanced concept for designing a smart rural communication network based on fiber-wireless architecture, millimeter-wave wireless distribution, and drone communication, able to integrate into a 5G-and-beyond system, is proposed and discussed. The general motivation of the proposal is the intensive introduction of digitalization and the Internet of Things in the country in order to provide equivalent communication services to both urban and rural residents. Moreover, this approach is capable of ensuring the transformation of the agricultural industrial sector, which is currently relatively less affected by new Internet technologies, into a high-tech business thanks to the explosive growth of labour productivity and the reduction of non-productive costs. The specific objective of the proposed concept is to develop a versatile telecommunications network based on the emerging 5G NR fiber-wireless architecture widely used in metropolitan access subnets, and on Internet of Agricultural Things (IoAT) technology.
5G and beyond wireless networks; Agriculture 4.0 era; Internet of agricultural things; Fiber-wireless architecture.
Adeethyia Shankar1, Stephanie Chang1, Yongzhong Zhao2, Xiaodi Wang1, and Tong Liu3, 1Department of Mathematics, Western Connecticut State University, Danbury, United States of America, 2Frontage Laboratories, Exton, United States of America and 3Tsinghua University, Beijing, China
The gut microbiome is composed of a plethora of microorganisms, and these microbes contribute to overall human health. It has been shown that dysbiosis of the microbiome is associated with certain diseases, including colorectal cancer and diabetes, yet the role of the microbiome is still little known. Here, we aim to develop a novel wavelet-based framework to dissect the microbiome correlations of host traits. Due to the clinical nature of the biological dataset, we utilize the discrete wavelet transform (DWT), enabling us to impute sparse matrices and decompose the data into different frequency components. We further carry out regressions of host traits on the microbiome relative abundances, followed by computing correlations between the regression-predicted trait values. Moreover, we visualize these microbiome correlations of host traits with heat maps and build a network of microbiome correlations of host traits. Our results revealed that microbiome correlations of host traits are prevalent. Our wavelet-based analytic framework aims to lay the foundation for further causality analysis of the complex interplay between the microbiome and host traits.
Microbiome, Discrete Wavelet Transform, Host Trait Correlation Network, Regression, Sparse Matrix, Imputation.
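The decomposition into frequency components mentioned above can be illustrated with a minimal single-level Haar DWT. This is a sketch of the general technique only: the wavelet family, decomposition depth, and imputation procedure actually used in the paper may differ.

```python
import math

def haar_dwt(signal):
    """One level of the discrete wavelet transform with the Haar
    wavelet: splits a signal of even length into approximation
    (low-frequency) and detail (high-frequency) coefficients."""
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform: perfectly reconstructs the original signal."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

# Decompose a toy abundance profile and reconstruct it exactly
approx, detail = haar_dwt([4.0, 2.0, 6.0, 8.0])
recon = haar_idwt(approx, detail)
```

Because the transform is invertible, smoothing or thresholding the detail coefficients before inversion is one common way such frameworks denoise or impute sparse measurements.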
Ftoon Kedwan, University of Prince Mugrin, Medina
This is a case report describing the successful Bed Management System (BMS) implementation project at Prince Sultan Medical Military City (PSMMC) in Saudi Arabia. Several advantages and challenges of the BMS implementation are reported, and a few findings are documented. The aim of this short paper is to demonstrate an ideal example of a technical solution implementation in a healthcare environment, where such implementations in most cases fail for many people-related reasons, including lack of cooperation. These failure reasons and their solutions are explored in further detail to benefit readers and healthcare organizations who are about to implement a similar project.
Health Informatics, Bed Management, Information Systems, Healthcare, Medical Project Implementation.
Ftoon Kedwan, PhD, Assistant Professor, College of Computer and Cyber Sciences, Head of Software Engineering Program, University of Prince Mugrin, Medina
Background: It is always recommended to keep up with the latest available clinical technologies to ensure patients’ safety, satisfaction, and high-quality service delivery, especially with complex healthcare systems. Such a system is adopted by the National Guard Health Affairs (NGHA) hospital in Saudi Arabia, which has a rapid growth and expansion pace. Considering the organization’s size and its many branches around the kingdom, the ICD10AM system had to be integrated with the hospital’s internal system. Objective: This industrial experience report answers two main questions: what were the disadvantages of the legacy ICD9CM, and what were the major key success or failure factors in the Electronic Medical Records (EMR) department? Method: The NGHA council committee recommended the adoption of the ICD10AM clinical coding system, as it is easily interfaced with the main information system, the QCPR, at all NGHA sites and branches. Results: The ICD10AM system covered the shortcomings and defects of the legacy system, and ICD10AM benefited the NGHA in both clinical and administrative aspects. Conclusions: The implementation and integration solution discussed in this experience report combines incorporated business and technical services. It helped healthcare plans define their strategy, plan a proper implementation workflow, achieve readiness, and conduct end-to-end testing and deployment of the ICD10AM code set.
Electronic Medical Records, Information Technology, Healthcare Information Systems, Clinical Project Implementation, ICD10AM, ICD9CM.
Ftoon Kedwan, Assistant Professor, College of Computer and Cyber Sciences, Head of Software Engineering Program, University of Prince Mugrin, Medina
A Remote Patients’ Monitoring System is an essential feature of remote healthcare efficiency and durability. It assures adequate clinical supervision and care for admitted patients in a timely and effortless manner. The objective of this case study is to show the impact of applying a health informatics solution through the implementation of a remote physiological monitoring system in a cardiac care center. It also discusses the following questions: how was health informatics employed in this case study? How did this case study help healthcare providers achieve better performance? How can we decide whether we need to implement such a system in other organizations? In addition, an overview of the Philips IntelliVue Patients’ Monitoring System (PMS) is presented and justified. The problem statement is introduced first; then the organizational contextual settings, a summary of key facts, the problem’s implications, and the actual solution tailored and achieved by KACC management are examined in detail.
Monitoring System; Project Implementation; Health Informatics; Cardiac Center.
Yu-Ping Liao, Hong-Xin Wu, Wen-Hsiang Yeh, and Yi-Lin Cheng, Department of Electrical Engineering, Chung Yuan Christian University, Taoyuan City 320314, Taiwan (R.O.C.)
We hope to ensure that the prime time for rescue is not missed when elderly people or patients with heart disease suffer sudden death or an emergency event at home. Therefore, this work proposes an intelligent vital sign monitoring robot based on the Robot Operating System (ROS). The heart rate is measured and monitored through a millimeter wave module. At the same time, an infrared thermal imager and a cloud database are combined with image recognition to detect the temperature of a person’s head, and the measured head temperature and heart rate data are regularly uploaded, in combination with blockchain technology, to establish a complete vital signs database. When the robot detects an unexpected situation, it uses the IFTTT service to send a Line message notification to the family or a rescue unit as soon as possible to avoid further unfortunate accidents.
Remote vital sign monitoring, millimeter wave radar, non-contact vital sign monitoring, IoT, YOLOv7, Robot Operating System, Autonomous Mobile Robot, thermal imager, Firebase.
Xiaohan Feng1 and Makoto Murakami2, 1Graduate School of Information Sciences and Arts, Toyo University, Kawagoe, Saitama, Japan and 2Dept. of Information Sciences and Arts, Toyo University, Kawagoe, Saitama, Japan
The Witch is a typical stereotype-busting character because its depiction has changed many times over a long history. This paper attempts to understand the visual interpretations and character positioning of the Witch by many creators in different eras; AI is used to help summarize current stereotypes in witch design and to propose a way to subvert the Witch stereotype in current popular culture. This study aims to understand the visual interpretations and character positioning of witches by many creators in different eras, and to subvert the stereotype of witches in current popular culture. It provides material for future research on character design stereotypes and proposes using artificial intelligence to break stereotypes in design, documented as an experiment in how to subvert stereotypes inherited from various periods in history. The method begins by using AI to compile stereotypical images of contemporary witches. Then, the two major components of the stereotype, "accessories" and "appearance," are analyzed from historical and social perspectives and attributed to the reasons for the formation and transformation of the Witch image. These past stereotypes are redesigned using the design approach of "extraction," "retention," and "conversion," and finally the advantages and disadvantages of this approach are summarized from a practical perspective. Research has shown that it is feasible to use AI to summarize design elements and use them as clues to trace history. This is especially true for characters such as the Witch, who have undergone many historical transitions: the more changes there are, the more elements can be gathered, and the greater the advantage of this method. Stereotypes change over time, and even when the current stereotype has become history, this method remains effective for newly created stereotypes.
China and ASEAN, Stock market, Share index volatility network, Complex networks.
Asma Mansour, Ghazi Blaiech, Mahjoub Marouane, Asma Ben Abdallah and Mohamed Hedi Bedoui, Laboratory of Technology and Medical Imaging, Faculty of Medicine, University of Monastir, Tunisia
Coronary artery segmentation is a crucial step in the computer-aided diagnosis of various coronary artery diseases. X-ray coronary angiograms are used to detect and locate the narrowing zone and diagnose artery stenosis. Recently, Deep Learning (DL)-based automatic segmentation has outperformed traditional methods. In the present paper, we are interested in three CNN models, U-Net, ResNet, and DenseNet, with a reduced number of parameters and shorter execution time. Furthermore, we created a private MS data set and applied data augmentation techniques to improve segmentation performance for coronary images. A comparative study based on the evaluation metrics Dice, Sensitivity, Precision, and PR-Curve was performed. Promising results are obtained and discussed according to the above performance criteria. This work opens horizons for other research in the field of blood vessel segmentation by deep learning architectures.
Coronary artery segmentation, deep learning, CNN models.
Abel Varghese, Mahendher Marri and Sibi Chacko, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
In this paper, a study of autonomous vehicles in MATLAB/Simulink® 2022 is carried out using vehicles with three different speeds: 40, 80, and 120 km/h. A typical highway in the UAE is considered for road modelling, and all vehicles modelled are representative of those available in the UAE. In the model, lane-following and lane-keeping assistance functions and Simulink blocks, which are described using artificial neural networks, are selected. The simulation is validated against existing published results for physical vehicle models. In the simulations, it is assumed that vehicles have minimal steering angles, as the system operates in an autonomous, collision-free environment selected from MATLAB. Results are obtained as velocities, accelerations, and safe distance with respect to the preceding vehicle. These results are critically analysed and validated.
Artificial Intelligence, Artificial Neural Networks, MATLAB, Autonomous Vehicle, Deep Neural Networks, Ackermann.
Elmira Vafay Eslahi and Amirali Baniasadi, Department of Electrical and Computer Engineering, University of Victoria, Victoria, Canada
Magnetic resonance imaging (MRI) is one of the best imaging techniques for producing high-quality images of objects. The long scan time is one of the biggest challenges in MRI acquisition. To address this challenge, many researchers have aimed at finding methods to speed up the process, since faster MRI can reduce patient discomfort and motion artifacts. Many reconstruction methods are used in this matter, such as deep learning-based MRI reconstruction, parallel MRI, and compressive sensing. Among these techniques, the convolutional neural network (CNN) generates high-quality images with faster scan and reconstruction procedures than the other techniques. In this study, we propose a new deep learning algorithm for MRI reconstruction, inspired by the Inception module proposed by Google. In other words, we introduce a new modification of the MRI U-Net using the Inception module. Our method is more flexible and robust than the standard U-Net.
Magnetic Resonance Imaging, Convolutional Neural Network, Fast Fourier Transform, Inception Module, U-Net, Deep Learning, Machine Learning, Low Frequency, Mean Square Error, Structural Similarity Index Measure & Peak Signal-to-Noise Ratio.
Ishraga Mustafa Awad Allam, Information Technology & Network Administration, University of Khartoum, Khartoum City, Khartoum, Sudan
Big integers are essential in many applications, cryptography being one of them. We also introduce multiple-byte fractions, which can be useful for accountants. In this study, the objective is to create a multiple-byte integer type with its arithmetic operations defined: addition, subtraction, multiplication, division and modular exponentiation are overloaded to work on this type. The multiple-byte integer is created using doubly linked lists, a well-known data structure technique. The reason is that doubly linked lists enable us to create integers of unlimited size: there is no need to pre-specify the size of the arrays storing these integers, since the memory holding the digits that constitute the integers is allocated dynamically. The operations on these integers are defined using the simple and straightforward techniques learnt in school. The results obtained are satisfactory and reliable. The type could be extended to define multiple-byte floating point numbers. In this work, an improvement has been made to the work of B. H. Flowers.
Big Integers, Big Data, Finer Fractions, Object-Oriented Programming in C++, Multiple-byte Integers, Multiple-byte Fractions.
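The schoolbook arithmetic over dynamically sized digit sequences described above can be sketched briefly. The paper implements this in C++ with doubly linked lists and operator overloading; this illustrative Python version substitutes a plain list of digits, which plays the same role of growing without a pre-specified size.

```python
def add_digits(a, b):
    """Schoolbook addition of two non-negative integers stored as
    lists of decimal digits, least-significant digit first (the
    role played by the doubly linked list in the paper). There is
    no size limit: the result list grows as needed."""
    result, carry, i = [], 0, 0
    while i < len(a) or i < len(b) or carry:
        total = carry
        if i < len(a):
            total += a[i]
        if i < len(b):
            total += b[i]
        result.append(total % 10)  # current digit
        carry = total // 10        # carry into the next column
        i += 1
    return result

# 999 + 1 = 1000, digits stored least-significant first: [0, 0, 0, 1]
digits = add_digits([9, 9, 9], [1])
```

Subtraction, multiplication and division follow the same column-by-column pattern, which is what makes the "techniques learnt in school" a natural fit for linked digit storage.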
Raed H. Allawi1,2, Ghassan H. Abdul-Majeed3, 1Thi-Qar Oil Company, Thi-Qar, Iraq, 2Al-Ayen University, Thi-Qar, Iraq, 3 University of Baghdad, Baghdad, Iraq
Drilling operations face several problems due to incorrect prediction of pore pressure (Pp). The most prominent problems are lost circulation and kicks, which may lead to blowouts, causing additional cost or the loss of the well, equipment, and human life. This study aims to predict Pp using an artificial neural network (ANN). The ANN model is developed based on 77 measured points of Pp obtained with the modular formation dynamics tester, gathered from five oil fields. Three variables were chosen as inputs to the ANN, namely bulk density, vertical depth, and acoustic compressional wave velocity, with the tangent sigmoid as the activation function. The best statistical results were obtained using a single hidden layer with 8 neurons (i.e., the optimum ANN structure was 3-8-1). To simplify the calculation of Pp, a mathematical model was derived to represent the developed ANN. Furthermore, a sensitivity analysis was implemented to show the impact of each input variable on Pp prediction. For further validation, the proposed ANN was checked against 84 measured data points not used in the development of the ANN. All the existing empirical equations were included in the validation for comparison purposes. The results revealed that the proposed ANN performed best and clearly outperformed all the existing equations.
Pore pressure, Artificial Neural Network, Modular Dynamics Tester, Tangent sigmoid, Compressional wave velocity.
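The 3-8-1 structure with a tangent sigmoid hidden layer can be sketched as a forward pass. This is only an illustration of the architecture: the weights below are random placeholders and the inputs are assumed to be pre-normalized, whereas the paper's model was trained on measured pore-pressure data.

```python
import math
import random

def forward(inputs, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a 3-8-1 feed-forward network: 3 inputs,
    8 hidden neurons with tangent sigmoid (tanh) activation,
    and 1 linear output neuron."""
    hidden = [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

random.seed(0)
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
b_hidden = [random.uniform(-1, 1) for _ in range(8)]
w_out = [random.uniform(-1, 1) for _ in range(8)]

# Placeholder inputs standing in for normalized bulk density,
# vertical depth, and compressional wave velocity
pp = forward([0.4, 0.7, 0.2], w_hidden, b_hidden, w_out, 0.0)
```

Writing the trained weights into a closed-form expression of this forward pass is also how the abstract's "mathematical model representing the developed ANN" can be obtained.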
Dr. Vinod Sharma and Prof. Chandrika Rajput, Department of Computer Science Engineering & Application, SCE, India
Reinforcement learning differs from supervised learning in that supervised learning's training data comes with an answer key, so the model is trained with the correct answer itself, whereas in reinforcement learning there is no answer key: the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its experience. Reinforcement learning is an area of machine learning concerned with taking suitable actions to maximize reward in a particular situation. It is employed by various software systems and machines to find the best possible behavior or path to take in a specific situation. Analysis of machine learning methodologies helps us understand the meaningful uses and behavior of technologies modelled on the human brain.
Machine Learning, Optimization, Recognition, Training Data, Clustering, Association, Approaches, Methods.
Mehrdad Saffarie and Kheirollah Rahseparfard, Department of Computer Engineering, Qom University, Qom, Iran.
During the past decade, there has been a notable transformation in all segments of the power industry worldwide, from generation to supply. This transformation affects, among others, domains related to regulatory, technological and market structures. These domains have adopted ambitious policy objectives aimed at improving the competitiveness, security and sustainability of the energy system. The change in the power sector is also driven by the growing penetration of renewable and Distributed Energy Resources (DER), as well as the increasing involvement of electricity consumers in the production and management of electricity, which in turn are expected to radically change the local electricity industry and markets, especially at the distribution level, creating opportunities but also posing challenges to the reliability and efficiency of system operation. The trend is in line with the smart-grid concept, which represents an unprecedented opportunity to move the energy industry into a new era of reliability, availability, and efficiency that will contribute to our economic and environmental health. During the transition period, it is critical to carry out testing, technology improvements, consumer education, development of standards and regulations, and information sharing between projects to ensure that the benefits we envision from the smart grid become a reality.
Internet of Things (IoT), Smart Grid, Applications, Integration of IoT and SG.
Vijay Prakash, Shashank Saxena, Aditya Tripathi and Arshad Ali, iNurture-TMU, India
Many frequent sequential traversal pattern mining algorithms have been developed that mine the set of frequent subsequence traversal patterns satisfying a minimum support constraint in a session database. However, previous algorithms give equal weightage to all sequential traversal patterns, while the pages in those patterns differ in importance and should carry different weights. Another main problem with most frequent sequential traversal pattern mining algorithms is that they produce a large number of patterns when the minimum support is lowered, and they provide no way to adjust the number of patterns other than increasing the minimum support. The proposed work aims to solve the data sparsity problem in recommendation systems. It applies a two-level pre-processing technique to reduce the data size at the item level. Additional resources such as item genre, tags, and time are added to learn and analyse the behaviour of user preferences in depth. To enhance performance, the proposed method utilizes Apache Spark MLlib's FP-Growth and association rule mining approach in a distributed environment. To reduce the computational cost of constructing the tree in FP-Growth, the candidate data set is stored in matrix form.
Sequential traversal pattern mining, Weight constraint, Data mining, Hidden behavioral analysis, Big data, FP-Growth, Association rule mining, Two-level clustering.
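The support and confidence computations underlying frequent pattern and association rule mining can be sketched as follows. This is a brute-force illustrative stand-in for the FP-Growth tree construction mentioned above, not the paper's distributed Spark implementation; the session items are invented for the example.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(sessions, min_support):
    """Count the support of all 1- and 2-itemsets in a session
    database and keep those meeting the minimum support
    constraint (FP-Growth computes the same sets via a tree)."""
    counts = Counter()
    for s in sessions:
        items = sorted(set(s))
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    return {k: v for k, v in counts.items() if v >= min_support}

def confidence(freq, antecedent, consequent):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent and consequent) / support(antecedent)."""
    both = tuple(sorted(set(antecedent) | set(consequent)))
    return freq[both] / freq[tuple(sorted(antecedent))]

sessions = [["home", "cart"], ["home", "cart", "pay"], ["home", "pay"]]
freq = frequent_itemsets(sessions, 2)
conf = confidence(freq, ("home",), ("cart",))  # 2 of 3 "home" sessions include "cart"
```

Raising `min_support` shrinks the result set, which is exactly the coarse control the abstract criticizes as the only tuning knob in unweighted approaches.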
Abhishek Shinde, Prajyot Bhoir, Sahil Shinde and Ms. Bushra Shaikh, Department of Information Technology, SIES Graduate School of Technology, Navi Mumbai, India
It is now more important than ever to take action to lessen the negative consequences of private vehicles. If successfully implemented, mass transit is the ideal option; however, because of its lack of door-to-door service, lengthy fixed routes, and unreliable timetables, many people do not appreciate it. Therefore, new facilities or services should be created to offer users a comfortable and dependable service and to lessen potentially dangerous environmental effects such as pollution and congestion. One of the cutting-edge technologies being used all over the world is ride sharing, in which users who have the same origin, destination and journey time are matched and share the transport.
Luiza Nacshon1 and Anna Sandler2, 1Senior Security Engineer, Red Hat, Israel, 2Software Engineer, Red Hat, Washington D.C
The goal of this research is to explore the security aspects of the hybrid Cloud Channel API world in greater depth and develop a rapid penetration testing tool that will help security researchers test Cloud Channel API security more effectively. The research proposes an innovative proxy-based solution for a rapid reactive test implementing a dynamic defense for channel API in the hybrid cloud.
The proxy-based solution executes security testing rules against channel API requests and validates weaknesses or vulnerabilities as a dynamic defense. Malicious or vulnerable requests may be denied, dropped, or alerted on, and the results and decisions are reflected in the API-management dashboard. In the scope of this paper, we focus on known API attacks; in future work, we plan to develop a machine learning algorithm for unknown and new channel API attacks.
OpenShift, channel API, security, hybrid cloud, penetration test.
Tony T. Lee, Bojun Lu and Hanli Chu, The Chinese University of Hong Kong, China
In this paper, we propose a depth-first search (DFS) algorithm for finding a maximum matching in general graphs. Unlike blossom-shrinking algorithms, which store all possible alternative alternating paths in the super-vertices shrunk from blossoms, the newly proposed algorithm does not involve blossom shrinking. The basic idea is to deflect the alternating path when facing blossoms. The algorithm maintains detour information in an auxiliary stack to minimize redundant data structures. A benefit of our technique is that it avoids spending time on shrinking and expanding blossoms. This DFS algorithm can determine a maximum matching of a general graph with m edges and n vertices in O(mn) time with space complexity O(n).
Maximum Matching, Augmenting Path, Blossom, Trunk, Sprout
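The augmenting-path idea at the heart of the algorithm above can be illustrated in the simpler bipartite setting, where no blossoms (odd cycles) arise and a plain DFS suffices. This sketch is Kuhn's classical bipartite algorithm, not the paper's general-graph method, which additionally deflects the alternating path around blossoms.

```python
def max_bipartite_matching(adj, n_left, n_right):
    """DFS augmenting-path (Kuhn's) algorithm for bipartite graphs.
    adj[u] lists the right-side neighbours of left vertex u."""
    match_right = [-1] * n_right  # right vertex -> matched left vertex

    def try_augment(u, visited):
        """DFS for an augmenting path starting at free left vertex u."""
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n_left))

# Example: left {0,1,2}, right {0,1}; the maximum matching has size 2
size = max_bipartite_matching([[0], [0, 1], [1]], 3, 2)
```

Each successful DFS flips the matched and unmatched edges along the path it found, growing the matching by one, which is the same augmenting step the paper's O(mn) general-graph algorithm performs while handling blossoms by deflection.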