Motor Control and Adaptive Skill Learning
Introduction
Motor control involves the interaction of several brain regions. A disproportionately large area of the motor cortex is devoted to the thumb, fingers, mouth, and lips, because these are crucial for manipulating objects and articulating speech. Touch is also central to acquiring diverse motor skills and carrying out motor-control processes: tactile feedback conditions movement-related variables such as movement efficiency, movement coherence, and the force adjustments needed to sustain ongoing motions. The brain's contribution changes rapidly depending on the state of the movement. Research on motor learning and control therefore matters for both the acquisition and the refinement of skills, since new motor patterns are acquired through movement and interaction with rich sensory environments. This paper summarizes and critiques peer-reviewed articles on motor control and skill learning.
Summary
Motor learning, or skill learning, covers a broad range of processes, from low-level mechanisms that maintain the calibration of our movements to high-level perceptual and cognitive decisions about how to act in a particular situation. Skill learning can be decomposed into forming a movement goal, selecting an action to achieve that goal, choosing the appropriate movement at the right time, and executing the chosen action. At its core, skill learning is about producing more effective behavior over time. Beyond improving skills, it also includes mechanisms for maintaining consistent performance in changing circumstances. The human body is constantly changing and subject to fatigue, growth, and injury (Krakauer et al., 2019). Consequently, the same motor commands do not always produce the same movement outcomes. Adapting to such ongoing changes in order to recover a previously attained level of performance is a major function of skill learning. Skill learning depends on procedural rather than declarative memory systems, and research on it emphasizes implicit learning through paradigms such as adaptation tasks.
Adaptation in skill learning refers to a particular class of behavioral change: modifying how a previously well-practiced action is performed in order to preserve performance in response to a change in the body. The authors note that the need to protect our skills in a dynamic environment is apparently so pervasive that the motor system maintains dedicated mechanisms for adjusting our actions (Krakauer et al., 2019). Under certain conditions, however, adaptation can also drive longer-term changes in behavior. Reviewing how learning is conceptualized offers insight into the neural representations that underlie it. Skill learning enables the acquisition of new motor skills and typically improves the accuracy and reliability of movements. Moreover, skill learning alters the physical organization of the brain: by engaging neurons in the cerebrum, additional sensory-motor pathways are formed, supporting a greater capacity for movement (Krakauer et al., 2019).
Interpretation and Application
Much of the nervous system's circuitry is devoted to generating movement. Motor control enables us to move, manipulate diverse objects, balance, and breathe. In the central nervous system, movement emerges from an elegant interplay between basic computations within distinct circuits and the neural information they exchange (Grillner & El Manira, 2020). The brain determines the overall objective of a movement before initiating it in an interactive environment: it selects which circuits to activate and then executes the motion with a specific timing, speed, and direction. The basal ganglia control distinct command centers in the brainstem, which in turn activate dedicated command pathways to specific central pattern generators (CPGs) in the spinal cord. The cerebellum plays a significant role in the fine regulation of movement and in motor learning; it also contributes to emotion and decision-making. An individual is born with limited motor control. As a person grows, the motor system matures gradually, which helps infants lift themselves and maintain balance; standing up and staying balanced requires contributions from many distinct parts of the brain. Different circuits are combined in a flexible, modular fashion to endow the system with a broad repertoire of motor behaviors (Grillner & El Manira, 2020).
Neuroscientists seek to understand how the brain masters and produces a fascinating range of intricate actions. Humans demonstrate an exceptionally wide variety of skilled movements, and learning is always linked to changes in the brain's circuitry. At the systems level, skill learning shifts action selection from cortical to subcortical circuits (Papale & Hooks, 2018). Skill learning alters the speed and efficiency with which an action is executed and the complexity of the movement sequences that can be prepared. Skills thus engage specific sensory processes for learning and control. Motor skill learning is significant in that skills can be stored in dedicated reward-feedback circuitry that follows the descending commands of the primary motor cortex. During skill learning, these leading structures help generate an estimate of the body's state during movement, albeit on a faster timescale than the integration of sensory information allows (Papale & Hooks, 2018).
Conclusion
Research on motor control and skill learning shapes our understanding of how people progress from novice to expert motor performance over the years. Human movement control has been conceptualized within many distinct frameworks of motor control. Movement patterns self-organize within the constraints of environmental conditions and the physical makeup of a person's body. Additionally, the brain prefers advantageous muscle-activation patterns, favoring muscles that contribute efficiently to positive mechanical work. The devastating consequences of the brain losing its capacity to govern body movements are evident in motor neuron disease. Finally, skill learning of functional tasks is considered complete when a person can readily perform the task in a variety of environments and reproduce it across multiple sessions.
References
Grillner, S., & El Manira, A. (2020). Current principles of motor control, with special reference to vertebrate locomotion. Physiological Reviews, 100(1), 271-320. https://doi.org/10.1152/physrev.00015.2019
Krakauer, J. W., Hadjiosif, A. M., Xu, J., Wong, A. L., & Haith, A. M. (2019). Motor learning. Comprehensive Physiology, 613-663. https://doi.org/10.1002/cphy.c170043
Papale, A. E., & Hooks, B. M. (2018). Circuit changes in motor cortex during motor skill learning. Neuroscience, 368, 283-297. https://doi.org/10.1016/j.neuroscience.2017.09.010
Quantum mechanics and the nature of reality
Introduction:
Exploring the relationship between modern physics and Eastern mysticism has interested scholars and scientists alike. Fritjof Capra's book "The Tao of Physics" delves into this relationship and demonstrates the parallels between modern physics and the teachings of Taoism and Buddhism. This topic is important as it sheds light on the interconnectedness of seemingly disparate fields and challenges the conventional dichotomy between science and spirituality. In the context of contemporary business studies, this issue has become increasingly relevant as companies strive to integrate diverse perspectives and promote innovation. This essay will focus on eight key relationships and comparisons between modern physics and Eastern philosophies. It will synthesize the academic and practitioner literature to identify best practices for managing this contemporary issue in business. By clarifying the research boundaries and setting out the significance of this topic, this essay aims to provide a comprehensive analysis of the parallels between modern physics and Eastern mysticism and their implications for contemporary business.
Quantum mechanics and the nature of reality:
Quantum mechanics involves the idea that the observer's viewpoint influences experiment outcomes. The terms "observer effect" and "measurement problem" are widely used to describe how measurement affects what is being observed. To put it differently, merely observing a subatomic particle can alter its behavior. In the double-slit experiment, it becomes evident that a particle can demonstrate characteristics of both waves and particles, and observation determines which behavior appears. By examining these profound implications, we can better understand the nature of reality and the role of consciousness in the universe.
In Taoism and Buddhism, understanding reality is incomplete without the observer's perspective. The idea of Yin and Yang in Taoism highlights how everything is connected and dependent on each other. Yin-Yang is a representation of balance that describes how two seemingly opposite or contrary forces may be complementary. This symbol embodies the idea of interconnectedness and continuous change within the universe.
The notion of dependent origination in Buddhism similarly emphasizes how everything is interconnected. It is claimed that nothing exists in isolation, and every entity depends upon other contributing factors. The perception of reality depends on an individual's experiences and consciousness.
Quantum mechanics and Eastern mysticism share many parallels that emphasize how limited our current understanding of the universe truly is. Despite significant progress in explaining the behavior of subatomic particles, modern physics still cannot provide a complete picture of reality. Eastern mysticism similarly acknowledges that the ultimate nature of reality transcends our present understanding and can only be grasped through direct realization.
Wave-particle duality and the concept of emptiness:
Wave-particle duality is a vital principle in quantum mechanics that describes how subatomic entities such as electrons and photons behave. Depending on how they are observed or measured, these particles can display either wave or particle behavior. Electrons fired at a double slit, for example, produce wave-like interference patterns; yet when we determine their position, they exhibit discrete particle properties.
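For reference, the standard small-angle form of the two-slit interference pattern is given below. This is a textbook formula added here for illustration; it is not taken from Capra.

```latex
% Two-slit interference (small-angle approximation): slit separation d,
% wavelength lambda, screen distance L. Bright fringes appear where the
% path difference between the slits is a whole number of wavelengths.
I(x) = I_{0}\cos^{2}\!\left(\frac{\pi d x}{\lambda L}\right)
```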
The basic nature of reality is explained in Buddhism through the concept of śūnyatā, or emptiness. All things, including ourselves, lack inherent existence; they are interdependent and in constant flux. This implies that everything in the cosmos is interrelated and arises on the basis of other conditions. The Buddhist idea of dependent origination likewise explains how everything arises due to specific causes and conditions.
Wave-particle duality resembles the concept of emptiness in that, in quantum mechanics, change is an inherent characteristic of particles. Just as emptiness reveals, all phenomena are devoid of intrinsic existence and depend on other factors. This indicates that the essence of reality is not stable but in continuous transition and evolution.
In The Tao of Physics, Fritjof Capra compares wave-particle duality with the Buddhist notion of emptiness. Capra (2010, p. 122) explains that the emptiness of subatomic particles is a dynamic state, represented by the quantum wave function. The particles' dynamic nature is reflected in the constantly changing wave function, which describes the probability of finding them in a given state.
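Formally, this is the Born rule, quoted here as the standard textbook statement behind Capra's point:

```latex
% Born rule: the wave function psi assigns only probabilities to outcomes;
% the squared amplitude integrates to one over all positions.
P(x)\,dx = \left|\psi(x,t)\right|^{2}dx,
\qquad
\int_{-\infty}^{\infty}\left|\psi(x,t)\right|^{2}dx = 1
```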
Moreover, the Buddhist belief in emptiness does not amount to a nihilistic view of reality. It rejects the idea of inherent existence in phenomena rather than denying their actual presence. The philosophy of emptiness holds that every entity depends on external factors for its existence; similarly, in quantum mechanics, a particle's behavior is determined only through observation or measurement.
Non-locality and interconnectedness:
Non-locality, a concept in quantum mechanics, describes the phenomenon of particle entanglement. The connection between two entangled particles is not bound by the constraints of space and time: a change in one particle immediately affects the other, irrespective of the distance between them. Einstein coined the phrase "spooky action at a distance" to describe this interconnectedness of particles in the quantum world.
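A minimal formal example of such entanglement is the Bell state, a textbook construction included here for illustration:

```latex
% Bell state: measuring qubit A immediately fixes the outcome at qubit B,
% regardless of the distance separating the two particles.
\left|\Phi^{+}\right\rangle = \frac{1}{\sqrt{2}}
\left(\left|0\right\rangle_{A}\left|0\right\rangle_{B}
    + \left|1\right\rangle_{A}\left|1\right\rangle_{B}\right)
```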
The fundamental idea behind Taoist and Buddhist philosophies is interconnectedness. In Taoism, Yin and Yang represent how everything is interconnected and interdependent. In Chinese philosophy, the Yin-Yang emblem signifies the balance between contrary elements, such as brightness vs. darkness or warmth vs. coldness, and male-female principles. The symbol represents the concept of universal connectivity and constant change.
Buddhism also emphasizes the interconnectedness of everything through dependent origination. The claim states that nothing can exist without being dependent on other factors. The perception of reality from an observer’s standpoint relies on their personal experiences and level of awareness.
The parallels between non-locality in quantum physics and the interconnectedness prevalent in Eastern mysticism are discussed by Fritjof Capra in The Tao of Physics. Capra asserts that this feature of quantum mechanics makes it similar to some aspects of the Eastern spiritual tradition (Capra, 2010, p. 102). His assertion suggests a resemblance between particle interconnectedness in quantum physics and the overall connectedness described in Eastern mystical beliefs.
Furthermore, Capra points out how crucial non-locality is in comprehending the actual nature of reality. Capra (2010, p. 104) asserts that we have gained new insight into matter through quantum physics, highlighting its non-locality and interconnectedness with the universe. This statement reflects a belief in universal interconnectedness where everything relies on each other and undergoes a continuous transformation.
The role of consciousness:
Eastern mysticism and modern physics are compared regarding consciousness's significant role. The role of an observer's consciousness is essential for determining experiment outcomes in quantum mechanics. Quantum mechanics' measurement problem indicates that observing a subatomic particle can modify its behavior. The observer's awareness impacts the particles' behavior and ultimately determines how experiments conclude.
Meditation and mindfulness are core practices for developing consciousness in Buddhism and Taoism. A deeper understanding of reality is possible by creating greater awareness of the present moment and recognizing how everything is interconnected.
The practice of mindfulness meditation in Buddhism involves staying present without judgment. Practicing mindfulness helps individuals become more conscious of their thoughts, emotions, and physical sensations, enhancing awareness and insight. Buddhist teachings suggest that through mindfulness individuals can gain a deeper understanding of reality and grasp the interconnected nature of everything.
Taoists consider meditation to be an indispensable aspect of cultivating consciousness. In Taoism, meditation involves concentrating on breathing while clearing the mind of all disturbances. By practicing this, people may develop a deeper understanding of their inner nature and recognize the interconnectedness of all things. According to Taoist teachings, individuals can connect with the universal energy of Tao by focusing on developing their awareness.
Fritjof Capra's The Tao of Physics delves into the connection between consciousness, modern physics, and Eastern mysticism. Capra (2010, p. 133) claims that an intimate and profound relationship exists between consciousness and the physical world. The statement proposes that consciousness does not merely observe reality but actively takes part in creating it.
Moreover, Capra asserts that individuals can attain a more comprehensive perception of reality by regularly practicing meditation and mindfulness. As stated by Capra (2010), meditation serves not only as a means of relaxation and stress reduction but also as a pathway to profound states of awareness. Developing such awareness can yield a deeper comprehension of how everything is interrelated and of the basic essence of existence.
Influence on Western scientists:
In the field of quantum mechanics, Western scientists have been notably impacted by the ideas of Taoism and Buddhism. Erwin Schrödinger—a pioneer in quantum physics—was notably influenced by Eastern philosophies.
Schrödinger's interest in Eastern philosophy began early and was greatly influenced by the ancient Indian texts known as the Upanishads, which share themes with both Buddhism and Taoism. He saw an intense connection between his work on quantum mechanics and the ideas of the Upanishads.
Schrödinger's conviction about the interconnected nature of reality eventually accompanied his development of the renowned wave equation of quantum mechanics. The wave function described by the equation contains information about all possible states of a quantum system and their probabilities.
In his book What Is Life?, Schrödinger explores how Eastern philosophy relates to contemporary physics. He suggests that in Eastern thought there is no dichotomy between the observer and the observed, subject and object, or the knower and the known (Schrödinger, 1944, p. 32). This viewpoint is consistent with the non-dualistic philosophy of Taoism and Buddhism.
Schrödinger's inquiry into Eastern philosophy and quantum mechanics deeply impacted Western science and philosophy. The concepts he introduced were instrumental in molding quantum mechanics and sparked more studies into the nature of consciousness and existence.
The Limits of Language and Conceptualization
The limits of language and conceptualization are another essential point of contact between modern physics and Eastern mysticism. To describe the behavior of particles and forces, physicists use mathematical equations that defy simple interpretation or visualization and require a high level of mathematical understanding to comprehend fully.
Taoism and Buddhism likewise accept the limits of language when describing reality, using paradoxes and metaphors that point toward deeper truths. The reason is that the underlying nature of reality lies beyond our capacity for linguistic comprehension and mental depiction, so it cannot be captured accurately in words.
Fritjof Capra's exploration of the boundaries of language and conceptualization reveals similarities between modern physics and Eastern mysticism in The Tao of Physics. As Capra (2010, p. 142) explained, the paradoxical nature of the quantum world reflects the limitations of our conceptual framework. This conveys that our prevailing knowledge of the universe is limited by our ability to comprehend and portray it using language and mathematical expressions.
Moreover, Capra contends that direct experience can surmount the constraints of language and conceptualization. Capra (2010, p. 143) argues that in both physics and Eastern mysticism, it is accepted that one can go beyond the limits imposed by language and conceptualization only through direct experience. This idea implies that reality can only be experienced firsthand and cannot be completely expressed using words or concepts.
To illustrate truth beyond linguistic limitations, Taoism and Buddhism use paradoxes and metaphors when describing reality. In Taoism, Wu Wei, or non-action, is frequently compared to a river: the river's effortless flow carries incredible power and influence without exerting force. The metaphor suggests that genuine power comes from effortless action rather than forceful control.
Non-Dualism
Non-dualism is a fundamental area of comparison between modern physics and Eastern mysticism. Particles in quantum mechanics can exist as both particles and waves, showing that apparent opposites can coexist. This indicates that a dualistic view of the world, which divides things into two distinct categories, is limited and incomplete.
In Taoism, along with Buddhism, there is a rejection of polarizing thoughts, including those dividing things up into good vs. bad or even self vs. others, with these being seen merely as arbitrary distinctions that hinder comprehension. Also, they contend that these theories hamper our aptitude to obtain enlightenment. Such concepts engender a wrong idea of separation among fundamentally interconnected and mutually dependent things.
The non-dualistic view of reality is demonstrated in Taoism through the concept of Yin and Yang. Yin and Yang represent opposing forces such as light and dark, hot and cold, male and female. Yet they are not independent entities but two aspects of a greater whole, continually changing and balancing each other.
The idea of emptiness in Buddhism represents a non-dualistic perspective on reality. Emptiness proposes that all things lack inherent existence and are instead dependent on other factors, continuously shifting. That implies no basic division among things, and everything is essentially linked.
In The Tao of Physics, Fritjof Capra examines how modern physics and Eastern mysticism both reject a dualistic approach to understanding reality. As he points out (Capra, 2010, p. 178), physics and mysticism provide an understanding of the world that stresses its inherent interdependence rather than its separation. This implies that a polarized approach to truth, in which matters are defined as one thing versus another, is constrained and not wholly inclusive.
In addition, Capra suggests that adopting a non-dualistic perspective can aid our comprehension of consciousness and the universe. Capra (2010, p. 180) argues that consciousness is an essential element of the universe rather than a product of matter. This statement embodies the non-dualistic philosophy, which proposes that there is no ultimate differentiation between observer and observed, or between subject and object.
Conclusion
In The Tao of Physics, Capra notes the similarities between modern physics and Eastern mysticism, notably Taoism and Buddhism. The parallels observed between these fields reveal a significant interconnectedness, pointing toward a shared foundation that surpasses cultural and disciplinary boundaries. The findings from this exploration enhance our knowledge of the universe and our place within it.
These parallels show how incorporating insights from Eastern mysticism into business practice can enhance our understanding of reality, the nature of particles, and the role played by consciousness. Adopting such sustainable and mindful business methods has the potential to benefit both individuals and the larger community.
Incorporating diverse perspectives into our understanding of the world contributes significantly to both knowledge and practice; integrating various cultural and disciplinary viewpoints will be important for improving problem-solving in the future.
Organizations and the wider community can benefit from these suggestions, which promote a more inclusive and sustainable approach to business. Other studies have likewise concluded that mindfulness and holistic approaches are crucial in business practice.
Further research could explore how integrating Eastern mysticism can be practically applied in business settings. Moreover, examining the incorporation of diverse cultural and disciplinary outlooks into addressing problems is also worth exploring.
Careful language must be used to avoid cultural appropriation and ensure sensitivity toward complex spiritual concepts.
Quantum Image Processing
INTRODUCTION
Image processing has become a popular and critical technology and field of study in our everyday lives. The need to extract important data from visual information has arisen in many fields, such as biomedicine, the military, economics, industry, and entertainment [1]. Analyzing and processing images requires representing our 3D world in 2D spaces and using complex algorithms to highlight and examine essential features [2]. With the rapid growth in the volume of visual information, these operations demand ever more computing power. According to Moore's law, the computing performance of classical computers doubles every 18 months; however, experts claim that this law will not hold true much longer [1]. Classical computers will therefore be unable to solve image processing problems on big data sets within reasonable time limits.
Failure of the Moore’s law can be solved with quantum computation. Existence of more efficient quantum algorithms and their ability to perform calculations faster than classical computers was shown by researchers [1]. Quantum computing also can dramatically improve areas of image processing [2]. Applying quantum computation to image processing tasks is rereferred as Quantum Image Processing or QIMP. This paper will review the basics of quantum image processing and computation, go into its use, with focus on security technologies, and discuss the challenges and future of QIMP.
QUANTUM COMPUTATION
Quantum computing is a new method of computation that could completely change the field of computer science. In 1982, the Nobel Prize-winning physicist Richard Feynman began exploring the possibility of using quantum systems for computing [1]. He was interested in modeling quantum systems on computers and realized that the number of particles has an exponential effect on the amount of classical memory a quantum system requires: simulating 20 quantum particles requires storing about a million values, while simulating 40 particles requires about a trillion. Interesting simulations with 100 or 1,000 particles are impossible even with all the computers on Earth [2]. The concept of using quantum mechanical effects to perform calculations thus emerged when Feynman proposed building computers that use quantum particles as a computational resource capable of modeling general quantum systems. Researchers have since looked closely at the processing capability of quantum systems, motivated by this exponential storage capacity and by striking phenomena such as quantum entanglement [4]. Over the past 20 years, quantum computing has exploded, with proofs that it can solve some problems exponentially faster than any classical computer [3]. If sufficiently large quantum computers can be built, the best-known quantum algorithm, Peter Shor's integer factorization algorithm, will make it possible to break the most common encryption methods currently in use [1].
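As a rough illustration of Feynman's memory argument, the following Python sketch reproduces the numbers quoted above. The helper name and the assumption of 16 bytes per complex amplitude are ours, not from the cited sources.

```python
# Classical memory needed to store the state of n two-level quantum particles:
# the state vector holds 2**n complex amplitudes.

def state_vector_size(n_particles: int) -> int:
    """Number of complex amplitudes for n two-level quantum particles."""
    return 2 ** n_particles

for n in (20, 40, 100):
    amplitudes = state_vector_size(n)
    gib = amplitudes * 16 / 2 ** 30  # assuming 16 bytes per double-precision amplitude
    print(f"{n:>3} particles -> {amplitudes:.3e} amplitudes (~{gib:.3e} GiB)")
```

Running this confirms the scaling: 20 particles need about a million values, 40 particles about a trillion, and 100 particles an amount far beyond all storage on Earth.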
All modern mainstream computers fall under the category of classical computers, which operate on a "Von Neumann architecture" based on an abstraction of discrete chunks of information [1]. Since a computer must ultimately be a physical device, scientists have recently moved away from this abstraction and recognized that the laws governing computation should be derived from physical law. Quantum mechanics, one of the most fundamental physical theories, was a natural candidate for investigating the physical feasibility of computational operations [5]. The important finding of this line of work is that quantum mechanics permits machines substantially more powerful than the Von Neumann abstraction allows.
Along with Shor's factoring algorithm, Lov Grover's search algorithm is a remarkable quantum technique that significantly reduces the work required to find a particular item. For instance, searching a million unsorted names for a given name takes an average of 500,000 operations on a classical computer, and the Von Neumann model of computation offers no faster method [1]. Using Grover's approach, which takes advantage of quantum parallelism, the name may be found with only about 1,000 comparisons under the quantum model. For longer lists, Grover's approach outperforms the conventional one by an even wider margin.
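The quoted counts follow from the standard estimates, as this small Python sketch shows. The ~N/2 classical average and the (π/4)·√N Grover iteration count are textbook approximations, not figures from [1].

```python
import math

N = 1_000_000                                 # unsorted items, as in the example above
classical_avg = N / 2                         # expected lookups scanning an unsorted list
grover_iters = (math.pi / 4) * math.sqrt(N)   # optimal number of Grover iterations

print(f"classical average: {classical_avg:,.0f} lookups")
print(f"Grover iterations: {grover_iters:,.0f}")  # ~785, the same order as the ~1,000 quoted
```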
The subject of quantum computing is huge and diverse today. Researchers work on a variety of topics, from building physical devices with technologies such as trapped ions and quantum dots to tackling challenging algorithmic problems and pinpointing the precise limits of quantum processing [5]. It has been established that quantum computers are inherently more powerful than classical ones, although it is still unclear how much more powerful; constructing a large quantum computer also remains a technological challenge [3].
So, quantum computation is still in its infancy. If the technical challenges are overcome, perhaps quantum computation will one day supersede all current computation techniques with a superior form of computation, just as decades of work have refined the classical computer from the bulky, slow vacuum-tube dinosaurs of the 1940s to the sleek, minimalist, fast transistorized computers that are now widely used. All of this is based on the peculiar laws and procedures of quantum physics, which are themselves anchored in the peculiarities of Nature. What computers will be derived from more complex physical theories like quantum field theory or superstring theory remains to be seen.
BACKGROUND
The field of quantum image processing aims to adapt traditional image processing techniques to the quantum computing environment. Its main focus is on using quantum computing technologies to record, modify, and recover quantum images in various formats and for various goals. It is believed that QIMP technologies will offer capabilities and performance as yet unmatched by their classical equivalents, owing to astonishing aspects of quantum processing such as entanglement and parallelism. These enhancements could take the form of increased computing speed, guaranteed security, reduced storage needs, and so on [3].
The first published work connecting quantum mechanics to image processing was Vlasov's from 1997, which concentrated on using a quantum system to distinguish orthogonal images. Efforts were then made to look for certain patterns in binary images and to identify a target's posture using quantum algorithms. In 2003, the publication of Venegas-Andraca and Bose's Qubit Lattice description of quantum images greatly contributed to the research that gave rise to what is now known as QIMP. The Real Ket, a follow-up representation developed by Latorre, was designed to encode quantum images as a foundation for further QIMP applications [1][3].
The flexible representation of quantum images (FRQI) proposed by Le et al. genuinely sparked research in the context of current descriptions of QIMP. This might be explained by the adaptable way it integrates a quantum image into a normalized state, which eases auxiliary transformations on the image's contents. Since the FRQI, a wide range of computational frameworks focusing on the spatial or chromatic content of images have been presented, along with numerous alternative quantum image representations (QIRs).
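For reference, the FRQI literature gives the normalized state for a 2^n × 2^n image in roughly the following form; this is reproduced from the standard formulation, and notation varies slightly across papers:

```latex
% FRQI: one color qubit (angle theta_i) entangled with 2n position qubits |i>.
\left|I(\theta)\right\rangle = \frac{1}{2^{n}}
\sum_{i=0}^{2^{2n}-1}
\left(\cos\theta_{i}\left|0\right\rangle + \sin\theta_{i}\left|1\right\rangle\right)
\otimes\left|i\right\rangle,
\qquad \theta_{i}\in\left[0,\tfrac{\pi}{2}\right]
```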
Representative QIRs that can be traced back to the FRQI include the multi-channel representation for quantum images (MCQI) and the novel enhanced quantum image representation (NEQR). The development of algorithms to alter the position and color information encoded with the FRQI and its variants has also received much attention in QIMP [5]. For instance, FRQI-based fast geometric transformations were initially proposed, including swapping, flipping, rotations, and restricted geometric transformations that confine these operations to a specific region of an image [3]. Recent discussions have focused on quantum image scaling and NEQR-based quantum image translation, which map each pixel's position in an input image to a new position in an output image. Single-qubit gates such as the X, Z, and H gates were initially used to propose broad FRQI-based forms of color transformation; later, the MCQI-based channel-of-interest operator, which shifts the grayscale value of a preselected color channel, and the channel-swapping operator, which exchanges the grayscale values of two channels, were studied further [3].
To demonstrate the viability and capability of QIMP methods and applications, researchers often reproduce standard digital image processing tasks using the QIRs already available. So far, contributions have been made to quantum image feature extraction, quantum image segmentation, quantum image morphology, and quantum image comparison using fundamental quantum gates and the operations mentioned above [5]. QIMP-based security technologies in particular have drawn much interest from researchers.
SECURITY TECHNOLOGIES
The necessity for secure communication has developed along with mankind's need to transfer information. With the development of digital technology, the demand for secure communication has increased. In order to realize secure, effective, and cutting-edge technologies for cryptography and information concealment, QIMP is totally based on the extension of digital image processing to the quantum computing domain [3]. Indeed, quantum computation and QIMP offer the potential for secure communication in fields like encryption, steganography, and watermarking.
Encryption, a direct application of the science of cryptography, is the practice of hiding information to render it unintelligible to those lacking specialized knowledge. It is frequently applied to confidential communications in order to maintain secrecy. Information hiding focuses on concealing the existence of messages, whereas cryptography is concerned with safeguarding their content. Because attackers cannot easily detect information hidden with techniques such as steganography and watermarking, hiding appears safer [3]. A key limitation, though, is the amount of information that can be concealed in a cover image without compromising its imperceptibility. Even though steganography and watermarking are similar, they have different goals, applications, and requirements [3], as the following points and the sketch after them illustrate:
In watermarking, the carrier image is the obvious content, while the copyright or ownership information is concealed and subject to authentication. Steganography, by contrast, aims to transmit a secret message safely by disguising it as an insignificant component of the carrier image without raising suspicion among outside adversaries.
In watermarking, the concealed information takes the form of a stochastic serial number or an image such as a logo, so watermarked images typically carry only a small amount of copyright information. Steganography frequently needs a large carrying capacity in the carrier image because its goal is to conceal the very presence of the hidden message.
Watermarked content may be subject to many kinds of infringement, such as cropping, filtering, and channel noise, whereas steganographic images do not face such issues.
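To make the information-hiding idea concrete, here is a deliberately classical least-significant-bit (LSB) steganography sketch in Python. It is only an analogy for the quantum schemes discussed in [3]; the function names and toy data are ours.

```python
import numpy as np

def embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one message bit into the LSB of each leading pixel."""
    stego = cover.copy().ravel()
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits  # clear LSB, set message bit
    return stego.reshape(cover.shape)

def extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the message bits back out of the LSBs."""
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)    # toy grayscale carrier
message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
assert np.array_equal(extract(embed(cover, message), message.size), message)
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover, which is exactly the imperceptibility requirement described above.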
FUTURE DIRECTIONS AND CONCLUSIONS
Beyond the continuing work toward physical implementations of quantum computing hardware, research is concentrating on what quantum technologies can accomplish once greater maturity is reached [3]. One such direction is the nexus of quantum computation and image processing known as quantum image processing. Because the field is relatively new, researchers confront both enormous potential and significant challenges in creating more effective and usable services.
All experimental QIMP protocol implementations so far have been limited to classical PCs and MATLAB simulations built on linear algebra, using complex vectors as quantum states and unitary matrices as unitary transforms [5]. These provide only a constrained realization of the potential of quantum computation. As researchers intensify their efforts to advance and expand QIMP technology, it is therefore crucial to understand the quantum computing software needed to implement the available algorithms so that software can complement hardware [3].
REFERENCES
[1] Beach, G., Lomont, C., & Cohen, C. (2003, October). Quantum image processing (QuIP). In 32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings (pp. 39-44). IEEE.
[2] Anand, A., Lyu, M., Baweja, P. S., & Patil, V. (2022). Quantum image processing. arXiv preprint arXiv:2203.01831.
[3] Yan, F., Iliyasu, A. M., & Le, P. Q. (2017). Quantum image processing: A review of advances in its security technologies. International Journal of Quantum Information, 15(03), 1730001.
[4] Cai, Y., Lu, X., & Jiang, N. (2018). A survey on quantum image processing. Chinese Journal of Electronics, 27(4), 718-727.
[5] Ruan, Y., Xue, X., & Shen, Y. (2021). Quantum image processing: Opportunities and challenges. Mathematical Problems in Engineering, 2021.
[6] Peli, T., & Malah, D. (1982). A study of edge detection algorithms. Computer Graphics and Image Processing, 20(1), 1-21.
Numerical Analysis: Computer Software and Modern Applications
Introduction
This article discusses how computer software has helped students and other users with numerical data analysis, together with practical issues such as the programming languages used. The interaction between numerical computation and symbolic computation is also reviewed.
Models used in numerical analysis
Mathematical modeling and numerical analysis have become very important in modern life. Pragmatic numerical analysis software has been integrated into most software packages, such as spreadsheet programs, enabling people to perform mathematical modeling without prior knowledge of the processes involved. This demands numerical analysis software that analysts can rely on. Problem-solving environments (PSEs) enable us to model and solve many situations, and graphical user interfaces have made PSEs easy to use for modeling with good implementations of the underlying mathematical theory. Numerical analysis has lately found modern applications in computer software across many fields. For example, computer-aided design (CAD) and computer-aided manufacturing (CAM) in engineering have driven the development of improved PSEs for both. Mathematical models in this field are based on Newton's laws of mechanics and mostly involve algebraic expressions and ordinary differential equations. Manipulating these mixed systems is difficult but important: modeling mechanical systems such as car simulators, flight simulators, and other engine-driven machinery requires solving differential-algebraic systems in real time.
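As a small illustration of the kind of model such PSEs integrate, the following Python sketch solves a mass-spring-damper equation with SciPy; the parameter values are chosen purely for illustration.

```python
from scipy.integrate import solve_ivp

# Newton's second law for a mass-spring-damper, m*x'' + c*x' + k*x = 0,
# rewritten as a first-order system y' = f(t, y) with y = [x, x'].
m, c, k = 1.0, 0.3, 4.0  # assumed mass, damping, and stiffness

def rhs(t, y):
    x, v = y
    return [v, -(c * v + k * x) / m]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])  # integrate from t = 0 to t = 10 s
print(f"x(10) = {sol.y[0, -1]:+.4f}")          # displacement after 10 seconds
```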
Atmospheric modeling is important for understanding the effects of human activities on the atmosphere. A great number of variables, such as the velocity of the atmosphere at a given point and time, temperature, and pressure, need to be calculated. In addition, chemicals in the atmosphere, such as the pollutant carbon dioxide, and their reactions need to be studied. Velocity, pressure, and temperature are defined by partial differential equations, while chemical reaction kinetics are defined by ordinary differential equations; these very complex systems require sophisticated software to handle. Businesses have also incorporated optimization methods into decision-making about efficient resource allocation: locating manufacturing and storage facilities, inventory control, and scheduling are all problems requiring numerical optimization (Brinkgreve, 1996).
Numerical software sources
Fortran has remained a widely used programming language, and it continues to be updated to meet current standards, with Fortran 95 as the latest version. Other useful languages include C, C++, and Java. Several numerical analysis software packages are used in numerical data analysis, including the following:
1. Analytica: a broad set of tools for building and analyzing numerical models, programmed visually and linked with influence diagrams.
2. FlexPro: a program for analyzing and presenting measurement data, with an Excel-like interface and a built-in vector programming language.
3. GNU Octave: a high-level language for numerical computation with a command-line interface for solving linear and nonlinear problems numerically; its language is largely compatible with MATLAB. Newer Linux programs such as Cantor and KAlgebra offer GUI front ends for Octave.
4. Jacket: a GPU toolbox for MATLAB that offloads MATLAB computations to the GPU for acceleration and data visualization.
5. pandas: a BSD-licensed Python library providing data structures and data-analysis tools.
6. Torch: provides support for tensor representation, manipulation, and statistical analysis.
7. TK Solver: a commercial problem-solving and mathematical modeling tool from Universal Technical Systems, based on a declarative, rule-based language.
8. XLfit: a statistical analysis and curve-fitting plugin for Excel.
9. GNU MCSim: a package for numerical integration and simulation with fast Monte Carlo and Markov chain Monte Carlo capabilities.
10. Sysquake: a computing environment based on a MATLAB-compatible language, with interactive graphics for engineering, mathematics, and physics. (Conte & De Boor, 2017)
Software development tools
Programming languages provide efficient tools for creating computational solutions, and a mathematical programming language should possess some basic qualities. First, its syntax should allow accurate and fast transformation of mathematical formulae into program statements. Second, the language should be built on primitives close to the basic concepts of mathematics. Lastly, it should include tools for fast and efficient execution. Programming languages are commonly grouped into generations: first-generation languages, 1954-1958 (Fortran I, ALGOL 58, Flowmatic, and IPL V); second-generation languages, 1959-1961 (Fortran II, ALGOL 60, COBOL, and LISP); third-generation languages, 1962-1970 (PL/1 (Fortran + COBOL + ALGOL), ALGOL 68, PASCAL, SIMULA, and APL); and the "generation gap" of 1970-1980, which produced many disparate languages (Bartholomew-Biggs, 2000).
Software options for solving problems
Software falls into three classes: (1) basic tools such as language compilers and graphics packages; (2) tools that solve users' problems directly, such as structural engineering systems; and (3) widely applicable generic tools such as mathematical systems and compiler generators.
A course of several actions must be undertaken to attain a numerical solution: (1) using an existing black-box package, such as PAFEC for finite-element work or GENSTAT for statistical analysis; (2) applying library routines such as IMSL, NETLIB, and NAG after splitting the problem at hand into well-defined components; or (3) writing a purpose-built program from scratch, which sometimes requires deep computing and analytical knowledge.
Numerical libraries
Design issues
Numerical libraries mainly perform standard numerical linear algebra operations, singular value decomposition, fast Fourier transforms, nonlinear optimization, linear programming, curve fitting, quadrature, and the evaluation of special functions.
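A minimal Python sketch of these routine families, using NumPy/SciPy as stand-ins for a numerical library; the specific example problems are ours.

```python
import numpy as np
from scipy import integrate, optimize

A = np.array([[3.0, 1.0], [1.0, 3.0]])
U, s, Vt = np.linalg.svd(A)                                   # SVD: s == [4, 2]
spectrum = np.fft.fft(np.sin(2 * np.pi * np.arange(8) / 8))   # fast Fourier transform
area, _ = integrate.quad(np.exp, 0.0, 1.0)                    # quadrature: e - 1
popt, _ = optimize.curve_fit(lambda x, a, b: a * x + b,       # linear curve fit
                             np.arange(5.0), 2 * np.arange(5.0) + 1)

print(s, round(area, 4), popt.round(4))  # [4. 2.] 1.7183 [2. 1.]
```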
NAG
In May 1970, a group of computing centers at six UK universities decided to create a library of numerical routines. A year later they released Mark 1 of the NAG library, containing 98 documented routines. By 1989, Mark 12 contained 688 routines, and further library versions had been created in Algol 60, Pascal, and Algol 68. In addition, specific versions were produced for computers from Cray, CDC, Data General, Harris, Telefunken, Xerox, and Philips. NAG's stated philosophy of maximum efficiency means the software aims to compute mathematical solutions as well as possible within the algorithm's solution domain, to signal and reject error conditions, and, where possible, to return a best upper bound on the error relative to a user-supplied tolerance.
International mathematics and statistical libraries (IMSL)
IMSL is a large mathematical software library whose main aim is commercial success through low cost and correspondingly high volume. Its more than 350 subroutines are available in versions compatible with computers from Data General, Xerox, DEC, Hewlett-Packard, and Burroughs (Wang & Garbow, 1981).
Peapack
This is a Fortran-based subroutine library intended to be easier to use than the commercial IMSL and NAG libraries. Peapack was designed in the 1980s, with documentation written for teaching an introduction to numerical analysis. The package contains routines for most basic calculations: polynomial roots, interpolation, and ordinary differential equations.
Machine-Dependent Libraries
Supercomputers and attached processors, such as Floating Point Systems (FPS) machines, have their own software libraries. In July 1989, Brad Carlile reported to the NA Digest that FPS Computing had announced "at-cost" availability of FPSmath, a de facto standard library of scientific and engineering algorithms. This speeds application research and development by allowing institutions to have the same mathematical tools across their entire computing environment at nominal cost, guaranteeing portability while taking advantage of supercomputer features and acceleration.
Sources of documentation
Users are provided with various documentation categories, including condensed information, encyclopedic, detective, and specialized information.
Comparison and testing algorithm
The choice of algorithm is affected by efficiency, storage costs, generality, and reliability across the full range of problems to be solved.
General-purpose computer algebra systems
A computer algebra system (CAS) is a software package for manipulating mathematical formulae. Its main purpose is to automate tedious calculations and to handle hard algebraic expressions. The ability to solve equations symbolically is the main difference between a CAS and a traditional calculator. Computer algebra systems provide the user with a programming language for defining custom procedures, along with facilities for graphing equations. Widely used systems include Mathematica, Mathcad, and Maple. They are used to simplify rational functions, factor polynomials, solve equations, and perform many other calculations. The algorithmic calculus procedures of Newton and Leibniz are hard and tedious to carry out by hand; computer algebra systems perform these tasks automatically and far faster than a human. The following is a summary of how several of these computer algebra systems work; a small symbolic-computation sketch follows the list.
Speakeasy: developed in the 1960s, mainly for matrix manipulation. Over the course of its evolution it acquired the most common paradigm tools: dynamically typed structured data objects, garbage collection and dynamic allocation, operator overloading, and add-on modules contributed by groups of users.
SageMath: open-source software with a unified Python interface to both open-source and proprietary general-purpose CASs and other programs such as GP, Magma, GAP, and Maple.
PARI: a computer algebra system for computations with matrices, algebraic numbers, and polynomials.
Mathematica: offers computer algebra capabilities and a programming language well suited to, among other things, number-theoretic computation.
Mathcad: has a WYSIWYG interface that aids publication-quality typesetting of mathematical equations.
Trilinos: an open-source, object-oriented collection of libraries applied in engineering and science for solving linear algebra problems with parallel algorithms (Wester, 1999).
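As a concrete taste of what such systems automate, here is a minimal sketch using SymPy, an open-source Python CAS not listed above; the example expressions are ours.

```python
import sympy as sp

x = sp.symbols("x")

print(sp.cancel((x**2 - 1) / (x - 1)))          # rational simplification -> x + 1
print(sp.factor(x**2 + 5 * x + 6))              # factoring -> (x + 2)*(x + 3)
print(sp.solve(sp.Eq(x**2, 2), x))              # exact roots -> [-sqrt(2), sqrt(2)]
print(sp.diff(sp.sin(x) * x, x))                # derivative -> x*cos(x) + sin(x)
print(sp.integrate(sp.exp(-x), (x, 0, sp.oo)))  # definite integral -> 1
```

Note that the roots come back as exact symbols (sqrt(2)) rather than floating-point approximations, which is precisely the difference from a traditional calculator described above.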
Computer-assisted data analysis
Computer-assisted qualitative data analysis software such as MAXQDA provides solutions to data problems without directly imposing interpretations on the user. Qualitative data software tools provide ways to structure, sort, and analyze large bodies of data, which facilitates managing evaluation and interpretation. Qualitative data analysis depends on methods of organizing, systematizing, and analyzing non-numeric material, such as those used in qualitative content analysis, mixed-methods analysis, group discussions, case and field studies, and grounded theory. Computer-assisted data analysis packages should facilitate and support methods of sorting, analyzing, and structuring data content regardless of the approach the researcher chooses. Data in the form of image files, video, audio material, and social media content can also be handled: sophisticated packages allow such content to be transcribed and imported directly into the program. QDA software such as MAXQDA supports the whole analysis process by providing overviews and visualizations of relationships, and it offers space for attaching memos to the various analytical steps, which helps the user understand them better. The first version of MAXQDA was created in 1989, making it a pioneering program in the area. From data collection to publishing the final report, regardless of the approach used, the program supports the user. Its central elements are coding, the systematic assignment of portions of text to themes, and the ability to make notes and associations. In MAXQDA, data are evaluated and interpreted by sorting material into segments using a hierarchical coding system, defining variables, providing tabular overviews, and assigning colors to text segments. The procedures can be tracked easily, and results are accessible within a few steps. Creating striking visualizations helps the user view the data from a completely different perspective and test theories, and the results can be exported to many programs for inclusion in the final publication (Chitu & Song, 2019).
Data analytics and processing platforms in Cyber-Physical Systems.
The speed of current developments in cyber-physical systems (CPS) and the Internet of Things (IoT) creates new challenges for business owners and data analysts, who must devise new techniques for analyzing big data. Cyber-physical systems integrate computer systems with the physical world; they process and display signals for the problem in the user's hands.
Data types
The most important first step in understanding data is being able to distinguish the different types of data; this distinction matters when implementing a machine learning algorithm. Variables are either numerical or categorical. Categorical data is subdivided into two subcategories: nominal, where there is no meaningful order, and ordinal, where there is an obvious order. Numerical data consists of counts or measurements and is grouped into two further types: discrete (integer) and continuous data. Several types of computer-aided data analysis exist, including qualitative analysis, hierarchical analysis, graph analysis, spatial analysis, and textual data analysis.
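This taxonomy can be made concrete in code. Below is a small pandas sketch; the column names and values are illustrative only.

```python
import pandas as pd

df = pd.DataFrame({
    "color":  pd.Categorical(["red", "blue", "red"]),                    # nominal
    "size":   pd.Categorical(["S", "L", "M"],
                             categories=["S", "M", "L"], ordered=True),  # ordinal
    "count":  pd.Series([3, 7, 2], dtype="int64"),                       # discrete
    "weight": pd.Series([1.2, 4.8, 0.9], dtype="float64"),               # continuous
})

print(df.dtypes)
print(df["size"].min())  # ordered categories make comparisons meaningful -> "S"
```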
Hybrid systems
A hybrid system is one whose behavior of interest is determined by coupling processes of distinct kinds, specifically discrete dynamics coupled with continuous dynamics. Such systems generate signals consisting of both discrete-valued and continuous-valued components, which depend on independent variables such as time. Hybrid models are used in the control of automotive engines: control algorithms implemented through embedded controllers reduce fuel consumption and pollutant emissions while keeping car performance neutral.
Soft computing
This approach separates soft computing, which is grounded in computational intelligence, from hard computing. Hard computing is characterized by formality and precision and is directed toward analyzing and designing physical systems and processes; it handles crisp systems, probability theory, mathematical programming, binary logic, approximation theory, and differential equations. Soft computing is used to analyze and design intelligent systems and handles problems involving fuzzy logic, probabilistic reasoning, and neural networks.
Data structures.
For a computer program to manipulate an equation symbolically, the equation must first be stored in computer memory. At the center of any computer algebra system is a data structure, or a combination of data structures, responsible for describing mathematical equations. Equations may reference other functions, be rational functions, or involve several variables; hence there is no single canonical way to map an equation onto a data structure. A representation may be costly in space and time, and thus inefficient, yet easy to program; conversely, a representation efficient for one class of problems may be inefficient for another, so there is no universal answer.
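A toy sketch of such a data structure in Python: an expression stored as a tree whose nodes are operators and whose leaves are variables or numbers, evaluated recursively. The class names are illustrative only.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Var:
    name: str

@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Mul:
    left: "Expr"
    right: "Expr"

Expr = Union[Var, Num, Add, Mul]

def evaluate(e: Expr, env: dict) -> float:
    """Recursively evaluate the tree against a variable binding."""
    if isinstance(e, Num):
        return e.value
    if isinstance(e, Var):
        return env[e.name]
    if isinstance(e, Add):
        return evaluate(e.left, env) + evaluate(e.right, env)
    return evaluate(e.left, env) * evaluate(e.right, env)

expr = Mul(Add(Var("x"), Num(2.0)), Var("x"))  # (x + 2) * x
print(evaluate(expr, {"x": 3.0}))              # -> 15.0
```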
Interface-oriented software
COMSOL: simulation and solver software for engineering and physics applications, especially coupled (multiphysics) phenomena.
Baudline: a time-frequency browser used for scientific visualization and numerical signal analysis.
Dataplot: provided by NIST.
Euler Mathematical Toolbox: a powerful numerical laboratory with a programming language that can handle real, complex, and interval numbers, matrices, and vectors.
Hermes: a C++ library of advanced finite-element algorithms for solving partial differential equations and coupled multiphysics problems.
DADiSP: a DSP-focused program combining MATLAB-like numerical capability with a spreadsheet interface.
FlexPro: a commercial program for automated and interactive analysis and presentation of measurement data. Other programs include IGOR Pro, the FEniCS Project, Fityk, and LabPlot.
Language-oriented software
ADMB: C++ software that uses automatic differentiation for nonlinear statistical modeling.
acslX: application software for modeling and evaluating the performance of continuous systems described by time-dependent, nonlinear differential equations.
AMPL: a mathematical modeling language for describing and solving large-scale, highly complex optimization problems.
Armadillo: a C++ linear algebra library with factorizations, decompositions, and statistical functions.
APMonitor: a mathematical modeling language for describing and solving representations of physical systems in the form of differential and algebraic equations.
Clojure: offers the Neanderthal, ClojureCL, and ClojureCUDA libraries for linear algebra and optimized matrix functions on CPU and GPU.
R: a system for statistical analysis and data manipulation in which the S language is implemented.
SAS: statistics software that includes a matrix programming language.
VisSim: a visual block-diagram program for nonlinear dynamic simulation that supports fast ordinary differential equation solving and real-time simulation of large, complex models.
World Programming System (WPS): supports mixing SAS, Python, and R code in a single user program for statistical analysis and data evaluation. Other language-oriented software includes Julia, Madagascar, O-Matrix, Optim, GAUSS, Perl Data Language, and many others.
Conclusion
Given the current trend of transformation in many aspects of life, computers must be incorporated into numerical data analysis systems to enable faster and more accurate simulations. Industrialization, increased business capital, anticipated future demand, growing populations, and other pressures of modern life have created the demand for computer-aided data analysis systems in the field of numerical analysis, as discussed in this article.
On Thermodynamic Technologies: A Short Paper on Heat Engines, Refrigerators, and Heat Pumps
Thermodynamic processes that occur spontaneously are all irreversible; that is, they proceed naturally in one direction but never in reverse. A wheel rolling across a rough road converts mechanical energy into heat through friction. That process is irreversible: it is impossible for a wheel at rest to spontaneously start moving while growing colder as it goes.
In this paper, the second law will be introduced by considering several thermodynamic devices: (1) heat engines, which are partly successful in converting heat into mechanical work, and (2) refrigerators and heat pumps, which are partly successful in transferring heat from cooler to hotter regions.
Heat Engines
The essence of our technological society is the ability to use energy resources other than muscle power. These resources come in many forms (e.g., solar, geothermal, wind, and hydroelectric), but even though a number of them are available in the environment, most of the energy used to run machinery comes from burning fossil fuels. Combustion yields heat, which can be used directly for heating buildings in frigid climates, for cooking and pasteurization, and for chemical processing. To operate motors and machines, however, we need to transform heat into mechanical energy.
Any device that converts heat partly into mechanical energy or work is called a heat engine. A heat engine absorbs heat from a source at a relatively high temperature, i.e., a hot reservoir (such as the combustion of fuel), performs mechanical work, and discards some heat at a lower temperature (Young & Freedman, 2019). According to the first law of thermodynamics, the initial and final internal energies of the system are equal when it is carried through a cyclic process, as shown in Fig. 1.
Fig. 1 Schematic energy-flow diagram for a heat engine
Thus, we can say that net heat flowing into the engine in a cyclic process is equal to the net work done by the engine (Brown et al., 2017).
We can illustrate the energy transformations in a heat engine using the energy-flow diagram of Fig. 1. The engine itself is represented by the circle. The amount of heat Q_H supplied to the engine by the hot reservoir is directly proportional to the width of the incoming "pipeline" at the top of the diagram. The width of the outgoing pipeline at the bottom is proportional to the magnitude |Q_C| of the heat discarded in the exhaust. The branch line to the right represents the portion of the supplied heat that the engine converts to mechanical work, W.
When an engine repeats the same cycle over and over, Q_H and Q_C represent the quantities of heat absorbed and rejected by the engine during one cycle; Q_H is positive, and Q_C is negative. The net heat Q absorbed per cycle is

Q = Q_H + Q_C = |Q_H| − |Q_C|   (Eq. 1.1)
The useful output of the engine is the net work W done by the working substance. From the first law,
W = Q = Q_H + Q_C = |Q_H| − |Q_C|   (Eq. 1.2)
Ideally, we would like to convert all the heat Q_H into work; in that case we would have Q_H = W and Q_C = 0. Experience shows that this is impossible: some heat is always wasted, and Q_C is never zero. We therefore define the thermal efficiency of an engine, denoted by e, as the quotient

e = W / Q_H   (Eq. 1.3)
The thermal efficiency e represents the fraction of Q_H that is converted to work. To put it another way, e is what you get divided by what you pay for; it is always less than unity, an all-too-familiar experience! In terms of the flow diagram of Fig. 1, the most efficient engine is the one for which the branch pipeline representing the work output is as wide as possible and the exhaust pipeline representing the discarded heat is as narrow as possible.
When we substitute the two expressions for W given by Eq. 1.2 into Eq. 1.3, we get the following equivalent expressions for e:

e = W / Q_H = 1 + Q_C / Q_H = 1 − |Q_C| / |Q_H|   (Eq. 1.4)
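As a quick numerical illustration (the figures are invented for this example): suppose an engine absorbs Q_H = 10,000 J from the hot reservoir each cycle and discards |Q_C| = 6,000 J in the exhaust. Then

W = |Q_H| − |Q_C| = 10,000 J − 6,000 J = 4,000 J

e = W / Q_H = 4,000 J / 10,000 J = 0.40

so this engine converts 40% of the heat it absorbs into work, in agreement with Eq. 1.4: e = 1 − 6,000/10,000 = 0.40.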
Fig. 2.1 Schematic energy-flow diagram for a refrigerator
Refrigerator and Heat Pump
We can understand the mechanism of a refrigerator by contrasting it with a heat engine. As explained in the first part, a heat engine takes heat from a hot reservoir and gives it off to a colder place. A refrigerator operates in reverse: it takes heat from a cold place (the inside of the refrigerator) and gives that heat off to a warmer place, usually the air of the room in which the refrigerator sits. Moreover, whereas a heat engine has a net output of mechanical work, a refrigerator requires a net input of mechanical work (Poredoš, 2021).
Fig. 2.1 shows the energy-flow diagram for a refrigerator. From the first law of thermodynamics for a cyclic process,

Q_H + Q_C − W = 0, or −Q_H = Q_C − W

and, because both Q_H and W are negative, this becomes

|Q_H| = Q_C + |W|   (Eq. 2.1)
This shows that the heat |Q_H| given off by the working substance to the hot reservoir is always greater than the heat Q_C taken in from the cold reservoir.
From an economic point of view, the best refrigeration cycle is the one that removes the greatest amount of heat |Q_C| from inside the refrigerator for the least expenditure of mechanical work |W|. The relevant ratio is therefore the coefficient of performance, K:

K = |Q_C| / |W|   (Eq. 2.2)

The larger this ratio is, the better the refrigerator.
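As a brief worked illustration (the numbers are invented): suppose a refrigerator removes |Q_C| = 3,000 J from its interior in one cycle while consuming |W| = 1,000 J of work. Then

K = |Q_C| / |W| = 3,000 J / 1,000 J = 3

and, by Eq. 2.1, the heat rejected into the room is |Q_H| = Q_C + |W| = 3,000 J + 1,000 J = 4,000 J. A coefficient of performance of 3 means that each joule of work pays for three joules of heat removed.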
A variation on this theme is the heat pump, which is essentially a refrigerator turned inside out. A heat pump heats a building by cooling the air outside it: the evaporator coil is placed outdoors, where it takes heat from the cold air, while the condenser coil is indoors, where it gives off heat to the warmer inside air. With this design, the heat |Q_H| delivered inside the building can be considerably greater than the work |W| needed to get it there.
Conclusion
The bottom line is that it is impossible to build a heat engine that converts heat completely into work, that is, an engine with 100% thermal efficiency. This is one statement of the second law of thermodynamics: no system can undergo a process in which it absorbs heat from a reservoir at a single temperature and converts that heat completely into mechanical work while ending in the same state in which it began. Heat flows spontaneously from hotter to colder objects, never the reverse. A refrigerator does carry heat from a colder to a hotter object, but its operation requires an input of mechanical energy or work. We can therefore conclude that no process can have as its sole result the transfer of heat from a cooler to a hotter object.
References
Brown, T. L., LeMay, H. E., Jr., Bursten, B. E., Murphy, C. J., & Woodward, P. M. (2017). Chemistry: The central science (14th ed.). Pearson.
Ozerov, R. P., & Vorobyev, A. A. (2007). Molecular physics. In Physics for chemists (pp. 169–250). Elsevier. https://doi.org/10.1016/B978-044452830-8/50005-2
Poredoš, A. (2021). Thermodynamics of heat pump and refrigeration cycles. Entropy, 23(5), 524. https://doi.org/10.3390/e23050524
Young, H. D., & Freedman, R. A. (2019). University physics with modern physics (15th ed.). Pearson.