Anomaly detection
Although there are many security measures in use, professionals in the field of computer security group them into a few broad classes: strategies that help resist attacks, strategies that help detect attacks, and strategies that help recover from attacks. Although the focus of this study is anomaly detection, it is worth keeping the intuition in mind: installing a monitoring sensor in one's home is a practical technique for noticing a break-in.
According to Bruce et al. (2020), anomaly detection is the practice of comparing an observed occurrence against a definition of activity that is thought to be normal in order to discover substantial differences. In many instances, intrusion detection systems are used to identify attacks (Bruce et al., 2020); these systems operate by evaluating the traffic flowing toward a computer-based system. For anomaly detection specifically, specialists compare current behavior against a historical baseline.
Anomaly detection, according to Bruce et al. (2020), is predicated on the idea that intrusive or improper behavior deviates from how a typical system is used. Most anomaly detection systems therefore learn the activity profile of a typical system and then flag any system events that statistically differ from the established profile. One advantage of this approach is that it abstracts a model of the system's typical behavior and can detect attacks whether or not the system has previously encountered them.
Using metrics derived from system measures such as CPU utilization, number and length of logins, memory consumption, and network activity, computer security specialists have built behavior models. A serious flaw in anomaly detection, however, is its susceptibility to a hacker who manages to enter the system during the learning period (Bruce et al., 2020): a cunning intruder might be able to teach the anomaly detector to treat invasive events as typical system behavior.
A number of approaches have been devised for anomaly detection, though experts have concerns about each of them, particularly regarding unauthorized access to source devices. For instance, the statistical approach provides a system that lets the anomaly detector measure how far the present behavior profile deviates from the original profile (Kibria et al., 2018). In this case the system learns only from data gathered during normal operation and tries to flag anomalous behavior by assigning low probabilities to unusual outcomes in the test data.
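As a rough illustration of this statistical approach, the sketch below flags observations that fall far outside a learned baseline; the metric names, values, and the three-standard-deviation threshold are illustrative assumptions, not taken from the sources.

```python
# Minimal sketch: flag observations that deviate from a historical baseline.
# Metric names, values, and the 3-sigma threshold are illustrative assumptions.
import statistics

def flag_anomalies(baseline, observed, threshold=3.0):
    """Return observations more than `threshold` std devs from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observed if abs(x - mean) > threshold * stdev]

# Baseline CPU utilisation (%) learned during normal operation.
baseline_cpu = [22, 25, 21, 24, 23, 26, 22, 24]
observed_cpu = [23, 25, 97, 24]   # 97% lies far outside the learned profile

print(flag_anomalies(baseline_cpu, observed_cpu))  # -> [97]
```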
The concern here, however, is that false negatives and false positives may be generated when the statistical measures employed are inadequate, and there is also the question of whether enough normal training data was collected (Rahul & Banyal, 2020). In addition, unauthorized access to source devices covers a range of concerns, such as hackers trying to use them to launch an attack or to steal information (Kibria et al., 2018). In the context of network security and performance, the source of system logs and network flow data is the infrastructure itself, which is a further concern.
Different processes have been implemented to govern access rights to network infrastructure devices. One process many organizations employ is a policy that classifies information, spelling out the importance of the information stored in the system (Rettig et al., 2019). Detecting anomalies in a network remains one of the major gaps facing computer and network security personnel. This research provides insight into computer and network security and, more specifically for the research thesis, draws on sources that provide ample information on securing wireless sensor networks (Kibria et al., 2018). The research, for which I have decided to use a comparative approach, will combine literature from a number of previous and historical studies to provide a strong resource for those seeking solutions to computer and network security issues.
In their article, Kibria et al. (2018) discuss DoS assaults against wireless sensor networks (WSNs). If deployed in an insecure environment, wireless network devices and sensors cannot protect the wireless medium from attacks and are susceptible to physical tampering. One generic security mechanism the authors suggest is symmetric cryptography, which uses shorter encryption keys and is arguably better suited than public-key cryptography to sensor networks. The article describes each protocol layer's weaknesses and suggested protections. For instance, at the physical layer, jamming and node destruction or tampering are used to attack the network and sensor devices (Munir, 2021). The corresponding defenses include detect-and-sleep, routing around jammed areas, concealing or disguising nodes, tamper-proof packaging, authentication, and interaction protection. The goal is to use prior research to give a more thorough, complete list of protection mechanisms and remedies to security concerns. A study of attacks, security measures, and difficulties in wireless sensor networks is another crucial source.
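To make the symmetric-cryptography suggestion concrete, here is a minimal sketch assuming Python's third-party `cryptography` package (pip install cryptography); real sensor nodes would use lighter-weight primitives, but the core idea, one shared secret key that both encrypts and decrypts, is the same.

```python
# Minimal sketch of symmetric encryption, assuming the third-party
# `cryptography` package. A single shared key both encrypts and decrypts,
# which is what makes the scheme cheap enough for constrained sensor nodes.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # shared secret distributed to the nodes
cipher = Fernet(key)

reading = b"node-17 temp=21.4C"      # hypothetical sensor payload
token = cipher.encrypt(reading)      # what travels over the radio link
assert cipher.decrypt(token) == reading
```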
According to Xie et al. (2018), there are two primary types of data in data science: spatial data and temporal data. The first step in spatial data mining entails discovering an interesting pattern that is not already recorded in the spatial store; using spatial data, the analyst generates the necessary information. Identifying such patterns is one of several difficulties in working with spatial data, particularly in research analysis. According to Mishra & Jena (2021), Clementine and Enterprise Miner are two commonly utilized tools for managing spatial data; these tools are primarily used for the analysis of various kinds of data, including genomic data and web data. Spatial data stores latitude and longitude information and includes coordinates pointing to a location in space.
Spatial data also carries a number of features that help locate various geographic locations and images of those locations. Temporal data, on the other hand, describes the situation in real time and is seen as transient because it does not last for very long (Midani et al., 2019). Temporal data is typically employed for demographic research, traffic management, and weather analysis, and the analytics performed during temporal analysis are used to pinpoint a problem's root cause, which aids in providing a remedy.
The answer follows from the pattern of the phenomena investigated. Temporal data supports a wide range of tasks, such as classification and comparison, trend analysis, correlation analysis, time-series analysis, and many others (Tschimben, 2022). The basic goal of temporal data mining is to pinpoint the temporal sequences and correlations within the data and to gather what is necessary to display behavior over a certain time frame; it makes it possible to compute key values at various points in time and to lay out a time sequence. The two categories are genuinely different even though they may seem the same. By definition, spatial data mining derives information and correlations from location-based data held in a database, whereas temporal data mining extracts trustworthy information from temporal data, aiding pattern recognition.
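As a small illustration of the temporal analysis just described, the following sketch computes a rolling mean over a time series so the underlying trend stands out; pandas and the daily-traffic figures are assumed choices for illustration.

```python
# Minimal sketch of temporal analysis with pandas (an assumed tool choice):
# a rolling mean smooths a time series so the underlying trend is visible.
import pandas as pd

ts = pd.Series(
    [30, 32, 31, 35, 36, 38, 37, 40],
    index=pd.date_range("2023-01-01", periods=8, freq="D"),
    name="daily_traffic",
)
trend = ts.rolling(window=3).mean()   # 3-day moving average
print(trend.dropna())
```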
In connection, data science platforms permit forecasting through the use of code and powerful computing. A well-built model makes it easier to develop trustworthy solutions to the problem that needs to be solved, and accuracy during modeling depends mainly on the extent of the inputs and the precision of the data acquired. Such systems make use of Hadoop (Bruce et al., 2020). Traditional databases and statistical tools cannot handle very large structured and unstructured datasets, but platform tools can; data scientists primarily use the platforms for cleaning, visualization backed by statistical analysis, and modeling code, among other tasks.
According to Bruce et al. (2020), business analysts also use such platforms to understand their clients' businesses, and platforms support replications based on stakeholder information. Related to platforms are data science tools. A data science tool serves to organize, examine, and visualize data; the main difference between the two is that a tool is used on its own, whereas a data science platform can incorporate multiple programming tools. R is a useful example of a data science tool: it is open-source, cost-free software used for statistical computing and visualization (Jha & Sharma, 2021). R has around 9,900 packages, including ggpubr, ggplot2, tidyr, and others that allow data scientists to conduct analysis, according to Bruce et al. (2020), and R integrates quite easily with other languages, including Python and SQL.
Conclusion
It has been highlighted that the field of data science is expanding quickly in the modern technological environment. As a result of this growth, as mentioned, data scientists and businesses face numerous challenges, particularly in managing data: since data comes from diverse sources, managing it is a concern. It has also been emphasized that the data analyst must possess the abilities needed to control and resolve any data-related issues that may arise, and that data can be classified in two different ways: spatially and temporally.
Analysis and visualization are required for the desired outcomes in each of these classifications. The data science life cycle, which starts with data collection and ends with data visualization, has also been discussed, and the data science platform has been presented as supporting various programming tools for analysis. In conclusion, various programming languages are utilized for analysis, including R, whose many internal packages aid data visualization for improved decision-making.
References
Bruce, P., Bruce, A., & Gedeck, P. (2020). Practical statistics for data scientists: 50+ essential concepts using R and Python. O'Reilly Media.
Jha, P., & Sharma, A. (2021, January). Framework to analyze malicious behaviour in cloud environment using machine learning techniques. In 2021 International Conference on Computer Communication and Informatics (ICCCI) (pp. 1-12). IEEE.
Kibria, M. G., Nguyen, K., Villardi, G. P., Zhao, O., Ishizu, K., & Kojima, F. (2018). Big data analytics, machine learning, and artificial intelligence in next-generation wireless networks. IEEE Access, 6, 32328-32338.
Midani, W., Fki, Z., & BenAyed, M. (2019, October). Online anomaly detection in ECG signal using hierarchical temporal memory. In 2019 Fifth International Conference on Advances in Biomedical Engineering (ICABME) (pp. 1-4). IEEE.
Mishra, B., & Jena, D. (2021). Mitigating cloud computing cybersecurity risks using machine learning techniques. In Advances in Machine Learning and Computational Intelligence: Proceedings of ICMLCI 2019 (pp. 525-531). Springer Singapore.
Munir, M. (2021). Thesis approved by the Department of Computer Science of the TU Kaiserslautern for the award of the Doctoral Degree doctor of engineering (Doctoral dissertation, Kyushu University, Japan).
Rahul, K., & Banyal, R. K. (2020). Data life cycle management in big data analytics. Procedia Computer Science, 173, 364-371.
Rettig, L., Khayati, M., Cudré-Mauroux, P., & Piórkowski, M. (2019). Online anomaly detection over big data streams. In Applied data science (pp. 289-312). Springer, Cham.
Tschimben, S. (2022). Anomaly Detection in Shared Spectrum (Doctoral dissertation, University of Colorado at Boulder).
Annotated Bibliography on R and Python Programming Languages
Ozgur, C., Colliau, T., Rogers, G., Hughes, Z., & Myer-Tyson, B. (2017). MatLab vs. Python vs. R. Journal of Data Science, 15(3), 355-372.
This paper expounds on the effectiveness of R, Python, and MatLab. It is written in a way suited for teaching in colleges, universities, and other institutions of higher learning. It explains the basic knowledge of all three programming languages while describing who mostly uses each language. It also lists the advantages and disadvantages of each of the three programming languages, R, Python, and MatLab, and discusses the advantages of one programming language over another and vice versa.
The paper further explains the utilization of the three programming languages, analyzes each against the others, and compares them with other programming languages. The examples given are real-life examples, and the analysis processes are explained in every programming language. Toward the end of the paper, the authors introduce "new" statistical methods, where R is compared with SAS and SPSS, accompanied by good examples that help readers broaden their understanding of data analysis, data mining, and big data. At the very end, the authors give their own experiences with all three programming languages.
Matloff, N. (2011). The art of R programming: A tour of statistical software design. No Starch Press.
In this book, everything that concerns R programming is discussed fully, from running an R program, simulations, and doing math in R, all the way to installing and using R packages and libraries. It majors only on the R programming language and on how it accommodates paradigms such as object-oriented programming (OOP).
To describe this book in a few words: it has all the information concerning the R programming language. It explicitly explains the debugging facilities used in R and how these facilities are applied. If the problem has been not knowing how to interface R with other programming languages, the problem is solved here, as the book explains how to interface R with the C and C++ programming languages. It also introduces a package through which R can be used from Python; this is achieved by installing a package called RPy and following the RPy syntax as explained in the book.
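To make the R-from-Python idea concrete, here is a minimal sketch. The book describes the RPy package; this example assumes its modern successor rpy2 (pip install rpy2) and a local R installation.

```python
# Minimal sketch of calling R from Python, assuming rpy2 (the successor
# to the RPy package the book describes) and an installed R runtime.
from rpy2 import robjects

# Evaluate an R expression and pull the result back into Python.
result = robjects.r("mean(c(1, 2, 3, 4))")
print(result[0])   # -> 2.5
```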
Chun, W. (2001). Core python programming (Vol. 1). Prentice Hall Professional.
The author of this book, Wesley J. Chun, precisely explains everything concerning the Python programming language, including definitions, the history of Python, the features Python has, how to obtain Python, and the installation procedure. The syntax of Python and the objects used in Python are correctly presented, and the examples given help readers deepen their roots in the Python programming language. The book is meant for beginners, intermediates, and experts, and is suitable for teaching at colleges.
Part two of the book contains advanced topics in which other technologies are integrated with Python. The topics discussed there include web programming, graphical user interface programming with Tkinter, multithreaded programming, and network programming, among others. In summary, this is an excellent book for starting out in learning and understanding Python programming.
Jones, O., Maillardet, R., & Robinson, A. (2014). Introduction to scientific programming and simulation using R. Chapman and Hall/CRC.
The motivation behind the writing of this book is the rapid growth of R programming, its development, and the application of the language in a number of fields. The book illustrates the application of R programming in different disciplines of the education curriculum. It is helpful to developers and programmers who use the R programming language in developing software, and it also helps those doing data analysis and statistics.
The authors major in simulation using R programming, starting from an introduction to R; the inner chapters then describe how simulation can be done using R. In addition, the book explains some key terms in statistics and data analysis, also showing how R programming is used in these other branches of science. In summary, the book gives a clear understanding of how R programming is used in data analysis and statistics.
Chambers, J. (2008). Software for data analysis: programming with R. Springer Science & Business Media.
This book starts with an introduction to the principles and concepts used in R. After the introduction, it goes deeper to explain the usage of R programming and its evaluation model. The examples given help statisticians learn how to produce different plots using R. The packages and libraries are then explained, followed by a guide on how to find and install them.
The authors of the book did not end there; they further give real-world examples of the application of R programming in businesses and industries. They explain that R programming has helped many businesses gain huge profits, because its statistical approach helps them learn market trends, make quality decisions, and at times predict the likelihood of future events using R programming tools and techniques.
Sanner, M. F. (1999). Python: a programming language for software integration and development. J Mol Graph Model, 17(1), 57-61.
This paper starts by giving a definition of what Python is and a brief history of the Python programming language. Its introductory part adds the Python extensions, how Python performs, and finally how Python functions as a tool for integration. In the part where Python is explained as an integration tool, the author describes how Python's functionality is implemented and how advantageous this is for platform independence.
This paper brings in a term used in the Python programming world called wrapping. Quoting from the paper, "this is a process where an existing C, C++ or Fortran code is accessed from the python programming language." Such existing code can be re-implemented using Python; wrapping a code happens when simply re-implementing it does not make sense. A good example is given in the paper to explain how to wrap code and how this wrapping occurs.
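As a small illustration of the wrapping idea, the sketch below calls an existing C function from Python via the standard-library ctypes module; the library path is an assumption (it targets a typical Linux system), but the principle is what matters.

```python
# Minimal sketch of "wrapping": calling existing C code from Python via
# ctypes. Assumes a Linux system where the C math library is libm.so.6.
import ctypes

libm = ctypes.CDLL("libm.so.6")
libm.cos.argtypes = [ctypes.c_double]   # declare the C signature
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))   # -> 1.0, computed by the wrapped C function
```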
Lutz, M. (2013). Learning Python: Powerful object-oriented programming. O'Reilly Media, Inc.
This book is quite interactive, since it starts with a question-and-answer session. The questions in this section address the frequently asked questions concerning the Python programming language, and they are answered in a way that even a first-time learner of Python can understand, helping the reader become familiar with the language.
In the succeeding sections, the Python programming language is described in depth, with the different functions used in Python discussed and very readable, understandable examples given. Everything it takes to become a Python guru is found here, making it a good companion on the journey to becoming a Python expert. In conclusion, the book has exercises after every section that help readers put into practice what they have learned.
Fox, J. (2005). Getting started with the R commander: a basic-statistics graphical user interface to R. J Stat Softw, 14(9), 1-42.
The paper starts by stating how R differs from S-PLUS and how a statistical graphical user interface is not built into R; rather, R includes the tools used for building graphical user interfaces. I rate this paper highly, since it gives screenshots of how to install R and its packages and libraries, guiding users who are installing R on their computers for the first time.
The audience the author focuses on is newcomers to the R programming language and to the R Commander graphical user interface. With this paper, anyone can install and use R without any problem. The paper concludes by giving practical examples recommended for new users of R to run before starting to run huge programs in R. For any newcomer to R, therefore, this is the recommended paper for perusal.
Advantages and Principles of OOPS (Object-oriented Programming) in C++
Introduction
Object-Oriented Programming, or OOP, is a software programming concept that is built entirely on 'objects'. OOP presents the application developer and the customer with distinct advantages. Object orientation tends to overcome development-related challenges and improve the quality of software products. The programming technology delivers maximum flexibility for engineers, higher software efficiency, and reduced operating costs (Singh & Kaur, 2008). Through inheritance, obsolete code can be removed and existing classes can be extended. We may build programs from standard functional modules that interact with each other instead of composing the application from scratch; this saves production time and improves productivity (Mohan, 2013). This essay explains the key principles of OOP and its benefits for programming.
Key Principles of Object-oriented Programming
There are fundamentally four key principles that make C++ an object-oriented paradigm in the world of programming: data abstraction, inheritance, polymorphism, and encapsulation. These are likewise known as the four tenets of object-oriented programming.
Encapsulation: This refers to the mechanism of hiding implementation details by limiting access to public methods. Accessor methods are made public while instance variables are kept private to accomplish it (Kotur, 2014).
Abstraction: The term refers to an idea or concept that is not tied to any specific instance. Using an abstract interface or class, one can express the purpose of a class instead of its real implementation. In this manner, a class need not know the inner details of another in order to use it; simply knowing the interface is good enough.
Inheritance: This expresses a "has-a" or "is-a" relation between two entities or objects. With inheritance, a derived class can reuse the source code of existing superclasses. For example, in a Java program, the notion of an "is-a" relation is built on class inheritance (via extends) or interface implementation (via implements); a FileOutputStream "is-an" OutputStream that simply writes to a file (Mohan, 2013).
Polymorphism: This principle means one name, multiple forms. It comes in two kinds, static and dynamic: static polymorphism is accomplished by means of method overloading, whereas dynamic polymorphism is accomplished with method overriding. It is closely associated with inheritance: one can write code that operates on a superclass, and it will work with any subclass type too. For instance, the collections framework in Java has an interface known as java.util.Collection; TreeSet and ArrayList are two distinct implementations of that interface (Mohan, 2013).
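A minimal sketch of these principles follows; it is written in Python for brevity, and each construct has a direct C++ counterpart (private members, base and derived classes, virtual functions). The Car/RaceCar names echo the example used later in this essay.

```python
# Minimal sketch of the four tenets. Python stands in for C++ here;
# the same structure maps directly onto C++ classes.
class Car:
    def __init__(self, speed):
        self._speed = speed            # encapsulation: kept "private" by convention

    @property
    def speed(self):                   # controlled public accessor
        return self._speed

    def describe(self):                # abstraction: callers need only this interface
        return f"a car doing {self.speed} km/h"

class RaceCar(Car):                    # inheritance: RaceCar "is-a" Car
    def describe(self):                # polymorphism: overriding changes behaviour
        return f"a race car screaming along at {self.speed} km/h"

for vehicle in (Car(90), RaceCar(300)):
    print(vehicle.describe())          # dynamic dispatch picks the right method
```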
Benefits of OOP in C++
OOP has turned out to be an essential part of software creation. Owing simply to the wide popularity and omnipresent nature of object-oriented languages like C++ and Java, one cannot build applications for mobile devices without comprehending the OOP methodology. The same applies to serious web development, given the popularity of languages like PHP, Ruby, and Python. The following are the major benefits of OOP for programmers (Mohan, 2013).
Modularity for simple and easy troubleshooting: When working with an OOP language, one knows precisely where to look. Objects are self-contained, and each bit of functionality does its own thing while leaving the other bits alone. Such modularity likewise permits an IT team to work on many objects concurrently while reducing the odds that one person might duplicate somebody else's work (Singh & Kaur, 2008).
Reuse of source code with inheritance: Assume that alongside a Car entity, one colleague requires a RaceCar entity and another requires a Limousine entity. Everybody creates their entities separately but discovers commonalities among them; in fact, every object is really just a different type of Car. This is where the inheritance approach saves time: create a general class such as Car, and then define the subclasses (Limousine and RaceCar) that inherit the general class's characteristics (Singh & Kaur, 2008).
Efficient problem-solving: OOP is frequently the most pragmatic and natural method once one gets the hang of it. An OOP language permits one to break the software into small problems or modules that one can then resolve, a single object instance at a time (Kotur, 2014).
Conclusion
In brief, the OOP paradigm offers a clear and concise modular programming structure. The OOP concepts are standard and well suited to describing abstract data types, with implementation details concealed from other program elements. Existing source code is simple and easy to change and maintain, and OOP helps model real-world scenarios. With the use of data abstraction and information hiding, one can restrict what is revealed, maintaining protection while still offering the information that must be seen. It is a common idea to break a complicated problem down into tiny pieces or distinct parts; this is exactly what OOP does, as it splits the software code into bite-sized objects. The information-hiding concept lets programmers create protected areas of a system that cannot be entered by untrusted code. The sophistication of the code base can be handled easily, and both larger and smaller structures can quickly be converted into object-oriented applications.
References
Kotur, P. B. (2014). Object Oriented Programming with C++. Sapna Book House (P) Ltd.
Mohan, P. (2013). Fundamentals of Object-Oriented Programming in Java. Create Space Independent Publishing Platform.
Singh, J., & Kaur, P. P. (2008). Object Oriented Programming Using C++. Technical Publications.
Smart Nation: How AI and Robots are replacing human jobs in Singapore
Is AI displacing human jobs?
Over time, the world witnessed the evolution of Supply Chain Management (SCM) through successive name changes, from Industrial Management to Production Management to Operations Management to the present name (Soni & Soni, 2019). With the changing designation, the scope of the field is also evolving thanks to advanced technology, especially Artificial Intelligence (AI). According to Balan (2019), "Artificial intelligence refers to a broad group of technologies, among which range the following: computer vision, natural language, virtual assistants, robotic process automation, and advanced machine learning" (p.17). Recently, the adoption of AI in SCM has gained public attention due to its diverse applications and potential. AI can help boost work productivity, cut costs, and improve work efficiency; in short, AI makes human life easier.
However, it raises a concern about job displacement and a declining need for the workforce. This essay will discuss the innovation AI brings to SCM in companies in terms of inventory, warehousing, and transportation, and conclude with the concern that AI is increasingly displacing human jobs.
Our world has gone through a severe Covid-19 pandemic, which has greatly driven enterprises to change their supply chain management. A survey of 200 senior-level supply chain executives conducted by Ernst & Young Canada (2023) in late 2020 and September 2022 found that strategic planning can make enterprises more resilient, collaborative, and networked with their customers, suppliers, and stakeholders. In the survey, 52% of executives disclosed that they had started strategic planning out to the year 2025 on robotic warehouses and stores, transportation solutions, and fully automatic planning.
Companies such as Amazon, IBM, and Walmart are typically considered pioneers in data analytics, robotic warehouses, and delivery drones.
Applying AI to inventory control and planning can sustainably reduce costs and increase the revenue of enterprises. Because AI can automatically collect and analyze historical, present, and projected data, it provides precise and reliable demand forecasting, which allows enterprises to optimize their resources in terms of inventory and customer orders (Dash et al., 2019). Advertising campaigns and entertainment programs on social media run on underlying machine learning engines that analyze customers' activities (Bughin et al., 2017, as cited in Dash et al., 2019); from there, enterprises gain evidence of consumers' near-real-time demand for their inventory planning. Walmart, one of the largest retailers, is an example of investing in big data analytics to capture customers' preferences and behaviours for inventory management, reducing overstock while keeping sufficient stock of the most in-demand products (ProjectPro, 2023).
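As a toy illustration of forecasting demand from historical data, here is a naive moving-average sketch; the figures and the method are assumptions for illustration, and production systems use far richer machine learning models.

```python
# Toy illustration of demand forecasting from historical sales data.
# A moving average stands in for the far richer models used in practice.
def moving_average_forecast(history, window=3):
    """Forecast next-period demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

weekly_units_sold = [120, 135, 128, 140, 150, 146]   # hypothetical history
print(round(moving_average_forecast(weekly_units_sold), 1))  # -> 145.3
```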
However, a challenge that may put Walmart at risk of falling behind its competitors is the limited number of professionals with experience in cutting-edge analytics and programming languages like Python and R (ProjectPro, 2023). Therefore, new team members joining Walmart must participate in a designed program to gain the necessary knowledge in big data analytics. To achieve such remarkable results in the industry, enterprises require experts who understand and can utilize data analysis; retraining workers to grasp such a dynamic, complex database is also fundamental.
Another advanced branch of AI, robotics, is also a promising way to perform heavy, repetitive tasks with high precision and speed. The use of robots in warehouses thus helps increase efficiency and productivity and reduce manual labour costs (Dash et al., 2019). Consider Amazon's warehouses, where more than 200,000 mobile robots work alongside hundreds of thousands of human workers. The mobile robots carry shelves of products from worker to worker to help pick, pack, and ship items, in massive warehouses where Amazon workers used to walk many miles a day for order-picking (Rey, 2019). Higher expectations may be placed on workers who operate alongside automated tasks in the warehouse: "The robots have raised the average picker's productivity from 100 items per hour to a target of 300 to 400" (Scheiber, 2019). Also, issues constantly arise during the performance of Amazon's picking and stowing robots that can only be solved by workers (Rey, 2019). Hence, Amazon's CEO announced a plan to upskill 100,000 of its US employees, including warehouse workers (Bezos, 2019). This is a great chance for the company's employees to develop their technical skills and move into better-paying jobs.
Additionally, it is essential to adopt AI and machine learning to avoid added costs and to shorten delivery times in logistics management. AI and machine learning can analyse and predict the duration of delivery, which helps deliveries reach customers on time and meet the agreed delivery date (Dash et al., 2019). Other advanced transportation technologies, such as drone delivery and driverless trucks, are safer, faster, and more economical for enterprises; after Amazon successfully completed a pilot delivery in Cambridge, interest in this area surged (Dash et al., 2019). However, Amazon's drones still cannot cross streets and cannot come near or fly over people, and each flight needs six people to monitor it, including observers and ground station operators, which shows that the innovation is still experimental and requires many workers for its operation (Hollister, 2023). Therefore, human intervention is still needed to monitor and troubleshoot issues to ensure the ongoing operation of enterprises.
Overall, the impact of AI on SCM has been, and will remain, significant. AI is step by step replacing non-customer-facing jobs in a positive way: it not only brings benefits to a company but also creates more job opportunities for the company's workers. By investing in state-of-the-art technology for warehousing, inventory, and transportation, a company can reduce physical workload, minimize human error, eliminate manual jobs, and save time, all of which benefits business development and expansion. At the same time, workers have a precious opportunity to be retrained and to upgrade themselves to an advanced level: by enriching their knowledge of data analysis, basic coding, programming languages, engineering, and so on, employees will be able to master AI implementation. Thus, to achieve the best results from adopting technology in SCM, AI and human workers should collaborate.
References
Dash, R., McMurtrey, M., Rebman, C., & Kar, K. U. (2019). Application of artificial intelligence in automation of supply chain management. Journal of Strategic Innovation and Sustainability, 14(3). http://www.m.www.na-businesspress.com/JSIS/JSIS14-3/DashR_14_3_.pdf
Ernst & Young Canada (2023). Research shows severe disruption through the pandemic is driving enterprises to make their supply chain more resilient, collaborative and networked. Ernst & Young Global Limited. https://www.ey.com/en_ca/supply-chain/how-covid-19-impacted-supply-chains-and-what-comes-next
Helo, P., & Hao, Y. (2020). Artificial intelligence in operations management and supply chain management: An exploratory case study. Production Planning and Control, 33(16), 1573-1590. https://www.tandfonline.com/doi/epdf/10.1080/09537287.2021.1882690?needAccess=true&role=button
Hollister, S. (2023). Amazon's delivery drones served fewer than ten houses in their first month. The Verge. https://www.theverge.com/2023/2/2/23582294/amazon-prime-air-drone-delivery
Rey, J. D. (2019). How robots are transforming Amazon warehouse jobs – for better and worse. Vox Media. https://www.vox.com/recode/2019/12/11/20982652/robots-amazon-warehouse-jobs-automation
Balan, C. (2019). Potential influence of artificial intelligence on the managerial skills of supply chain executives. https://www.srac.ro/calitatea/en/arhiva/supliment/2019/Q-asContents_Vol.20_S3_October-2019.pdf#page=9
ProjectPro (2023). How big data analysis helped increase Walmart's sales turnover. https://www.projectpro.io/article/how-big-data-analysis-helped-increase-walmarts-sales-turnover/109
Scheiber, N. (2019). Inside an Amazon warehouse, robots' ways rub off on humans. The New York Times. https://www.nytimes.com/2019/07/03/business/economy/amazon-warehouse-labor-robots.html
Soni, R. G., & Soni, B. (2019). Evolution of supply chain management: Ethical issues for leaders. The Competition Forum, 17(2). https://d1wqtxts1xzle7.cloudfront.net/64517433/2019_Competition_Forum_Vol_17_No._2-1-libre.pdf#page=63
A Hybrid Approach of Compiler and Interpreter
Abstract
This study investigates the fundamental concepts of compilation and interpretation and highlights why programming languages require a compiler. It also discusses several recent breakthroughs relevant to the proposed study. Almost all realistic programs nowadays are authored in higher-level languages or assembly, and then converted to machine code by a compiler and/or assembler and linker. Most popular programming languages are in demand because of their accessibility, but for lack of optimisation they take proportionally more time and space to execute. There is also no technique for code reduction; code size is bigger than what is really required owing to repetition, notably in the names of identifiers (Danvy, 2013).
INTRODUCTION
Nearly all computers are designed to carry out relatively basic instructions (but to do so very quickly) in order to lessen the complexity of hardware design and construction. Using a programming language, one creates a computer program by combining many of these basic instructions. To avoid working at that level, most programs are written in a high-level language, and high-level code must be translated into machine code in order to be executed by the CPU. A compiler (together with a linker) or an interpreter can perform this transformation: the former normally generates binary machine code that is directly ready for execution by the hardware, whereas toolchains often proceed by first generating a transitional form called intermediate code. To bridge the gap between these languages and the machine's instruction set, various mechanisms of translation are necessary; this is where compilers and interpreters come into play (Park et al., 2018).
Problem Statement
When a program written in a high-level language is fed into a compiler, it is converted from that high-level language into the low-level machine or assembly language needed by the hardware. The compiler also looks for and detects apparent programming errors throughout this procedure. The speed at which programs may be created is directly correlated with the use of high-level computer languages. The following are the most important justifications:
Issues and barriers
The notation employed by high-level languages is more human-like than machine language. In certain cases, the translator can catch apparent faults in the code. In general, programs written in high-level languages tend to be more compact than those written in assembly language. A further advantage of adopting a high-level language is that the same program may be compiled into several different machine languages and hence execute on many different computers.
Relevance and significance
Programming in a high-level language and then translating to machine language may well be quicker than writing machine language by hand. Even so, hand-coded machine language is still used in certain time-critical systems. A competent compiler, on the other hand, can come close to the efficiency of hand-written assembly language when converting well-structured programs.
Research Questions
What are the phases of a compiler?
What is the relation between a compiler and an interpreter?
Chapter 2 Literature review
A compiler is a piece of software that converts source code to machine code appropriate for a certain platform. Before the program is run on the computer, the source code is first processed by the compiler and converted into the machine's native language.
Building a program is a multi-step procedure that consists of two distinct phases: the compiling phase and the linking phase. The process of compiling produces an intermediate file, often referred to as an object code file (Park et al., 2013).
This file contains the instruction codes that are the fundamental building blocks of the application's capabilities. Every line of the source code is paired with one or more instruction codes relevant to the processor the program will run on. C and C++ are two examples of languages in which compilers may be written. During the linking process, an executable output file is created by linking the object code file with any other object files required by the program (Park et al., 2018).
This is done with the help of a linker. The result of the linker is an executable file, which will only run on the operating system the program was built for. A front-end layer, an intermediate layer, and a back-end are the three layers that make up the internal framework of a compiler.
The compilation process may be broken down into a few distinct steps, each of which has a well-defined interface. The stages carry out their tasks in order, with each step using the output of the phase that came before it as its input. A standard breakdown into phases is: lexical analysis, syntax analysis (parsing), semantic analysis, intermediate code generation, code optimisation, and target code generation.
Chapter 3 Methodology Approach
COMPILER
As outlined in the problem statement above, a compiler converts a program from a high-level language into the low-level language needed by the machine, detecting apparent programming errors along the way (Park et al., 2013). High-level languages directly speed up program creation: their notation is more human-like than machine language, and in certain cases the translator is able to catch apparent faults in the code.
THE PHASES OF COMPILER
It is a good idea to organise the work, since developing a compiler is not easy. This is often accomplished by splitting compilation into modular stages with well-defined steps and interfaces. Conceptually, each phase operates sequentially and takes as its input the product of the preceding phase; in practice, however, these stages are commonly interleaved.
Lexical analysis:
The first step in understanding the program text is to read it and divide it into tokens. Each token corresponds to a symbol in the programming language, such as a variable name, a keyword, or an integer literal.
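A toy sketch of this phase follows; the token categories and the mini-language are assumptions for illustration, not taken from the cited works.

```python
# Toy lexer: splits source text into (kind, value) tokens, the first
# compiler phase described above. Unrecognised characters are ignored
# silently, which is acceptable only for a sketch.
import re

TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    for m in MASTER.finditer(src):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("total = count + 42")))
# [('IDENT', 'total'), ('OP', '='), ('IDENT', 'count'), ('OP', '+'), ('NUMBER', '42')]
```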
Chapter 4 Findings, Analysis, Synthesis
COMPILER VS INTERPRETER
When a high-level language (like C or Java) is translated into binary form, a compiler or an interpreter accomplishes the same thing: both are programs that process high-level code so that it can carry out various activities, and different high-level languages call for different processing. Compilers and interpreters share the same goal but differ in the methods they use to reach it, and they may also work together to run a program. In certain cases, the compiler produces intermediate-level code that is interpreted rather than converted to machine code. In some systems, a program may even be divided into sections that are translated to machine code, sections that are converted to intermediate code, and sections that are interpreted at run time. Each option involves a trade-off between speed and space, and each stage of translation tends to improve execution performance while reducing the size of the code. An interpreter is especially helpful in the early stages of the development process, when the speed at which a new version of the program may be tested trumps efficiency; since no time is spent preparing the program for execution, it begins running more quickly. Error messages may also be more exact and instructive, since interpreters operate on a representation closer to the source program than machine code. In the real world, of course, the implementer of a language system has a wider range of options, spanning the two extremes; a wide range of implementations may be found in practice due to trade-offs among compile speed, execution performance, space utilisation, responsiveness, and other variables (Park et al., 2018).
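Python itself is a convenient, concrete example of the hybrid scheme described above: source code is first compiled to bytecode, which a virtual machine then interprets. A minimal demonstration using only the standard library:

```python
# Python's own hybrid pipeline: compile source to bytecode, inspect the
# intermediate form, then let the interpreter execute it.
import dis

source = "a + b * 2"
code = compile(source, "<demo>", "eval")   # compilation step
dis.dis(code)                              # inspect the intermediate bytecode
print(eval(code, {"a": 1, "b": 3}))        # interpretation step -> 7
```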
Chapter 5 CONCLUSION
It is clear, then, that each has its own set of uses, benefits, and drawbacks, and that, depending on the specific language, use case, and requirements, both may operate separately. Another possibility is that certain compilation stages that interpreters lack, such as optimisation, may be worked on and integrated into interpreters so that they produce results that are more efficient and use less memory. The strategy relies on the preceding investigation to create such a hybrid that can handle languages such as JavaScript, Python, and Perl; comprehensibility will be improved as a result of this effort, and both the code size and the computational burden will be much reduced. The accessibility of these programming languages makes them popular, but the absence of optimisation means that they take a long time and a lot of space to execute. In addition, there is no way to reduce the amount of code: repetition, particularly in the names of variables, causes the code to be bigger than it needs to be. We want to develop a "Compreter" that incorporates the optimisation phases of a compiler into the pipeline of interpreted code and afterwards produces optimised code for the intermediate representation, which would be more efficient in run-time memory consumption than the original code. An existing JavaScript interpreter, already created to handle the problem, will be used as a benchmark for our proof-of-concept Compreter on a subset of JavaScript. To close the gap between compilers and interpreters, we believe that increasing the capability of this technology is the best answer (Danvy, 2013).
References
Danvy, O. (2013). A Journey from Interpreters to Compilers and Virtual Machines (Vol. 2830). Springer Berlin Heidelberg.
Park, S., Latifi, S., Park, Y., Behroozi, A., Jeon, B., & Mahlke, S. (2022). SRTuner: Effective Compiler Optimization Customization by Exposing Synergistic Relations.
Tao, X., Pang, J., Xu, J., & Zhu, Y. (2021). Compiler-directed scratchpad memory data transfer optimization for multithreaded applications on a heterogeneous many-core architecture. Journal of Supercomputing, 77(12), 14502–14524.
Feng, J. G., He, Y. P., & Tao, Q. M. (2021). Evaluation of Compilers’ Capability of Automatic Vectorization Based on Source Code Analysis. Scientific Programming, 1–15.
Li, J., Cao, W., Dong, X., Li, G., Wang, X., Zhao, P., Liu, L., & Feng, X. (2021). Compiler-assisted Operator Template Library for DNN Accelerators. International Journal of Parallel Programming, 49(5), 628–645.
A comparative study of Python, Java and C++ Programming Languages
Abstract
Over the last few decades, we have witnessed exponential growth in programming languages, primarily due to the rise of information technologies and increased software demand. Continued research has also improved how we program and develop software in the modern information age. This paper is a comparative study of three widely used programming languages, namely Python, Java, and C++. Typically, C-based programming languages have dominated the software development arena and have been used to develop sophisticated systems across the globe. Java is estimated to be the most widely applied programming language in the world, with over 3 billion devices running Java. However, C++ and Java are experiencing reduced popularity since the introduction of Python into the programming arena: Python provides admirable features that address modern-day programming problems, which accounts for its extensive popularity. This paper advances through chapters one to five. Chapter one gives a detailed overview, covering the background of the study, the problem statement, the objectives of the study, and the research questions that will guide the study. Chapter two provides a detailed literature review and some similar works associated with this paper. Chapter three describes the methodologies used in the study. Chapter four provides an analysis of each language's features, while chapter five summarizes the whole paper.
Chapter 1 Introduction
Software engineers, scholars, and programming experts define a programming language as a constructed language, or computer language, designed and developed to help software developers communicate commands to a computer or machine. Typically, programming languages are used to control a computing device's behavior by communicating instructions to the machine. Due to the exponential growth in information technologies, there has been tremendous growth in the development of programming languages: for instance, programming languages have gone through five generations since they were first introduced in the early 1950s (Oguntunde, 2012). This growth has seen software development shift from the era of assembly language to an era where computers are being designed to solve problems in their environment without the programmers.
Overview of the Java programming language and its significance in software development
The Sun Microsystems team led by James Gosling was the first to work on Java, in 1991. The original version of Java (Java 1.0) was created to develop systems for home appliances and was released in 1995. With that release, the platform promised to deliver Write Once, Run Anywhere (WORA) technology that could eliminate the high-cost runtimes experienced with other typical languages. As of today, Java has released over eight versions of the platform. One of Java's latest standard editions is Java 8, released in March 2014, with Oracle indicating it will release Java 9 in a short while; however, Oracle recommends Java 7 update 51 as the most suitable version for software development and writing of code. Due to the extensive application of Java across numerous platforms, Oracle has produced various configurations to suit each platform's needs and demands. For instance, Java 2 Micro Edition (J2ME) was developed primarily to support the development of mobile applications, while Java 2 Enterprise Edition (J2EE) was designed to ease the development of enterprise applications.
Some of the key features of Java are:
Object-oriented – unlike other typical programming languages, the basic unit of a Java program is an object. Objects in Java allow easy scalability and reusability of code (Oguntunde, 2012).
Platform independent – during compilation, Java is not compiled into machine code specific to a particular machine. Instead, Java is compiled into bytecode that the Java Virtual Machine interprets on whatever machine the program runs on.
Security – Java programs support the use of public-key encryption to configure authentication mechanisms. Also, Java supports the development of tamper-free software.
Multithreaded – when using Java, it is possible to create two or more threads of execution that carry out tasks concurrently. This feature is applied extensively to enable programmers to build interactive systems that can do multiple tasks simultaneously.
Portable – since Java programs are platform-independent, they can be moved from one platform to another smoothly.
Architecture-neutral – the object file generated by the Java compiler is architecture-neutral, meaning the compiled code can run on numerous platforms and be executed by various operating systems and processors. All that is required to run a Java program on any architecture is a Java runtime system.
Overview of C++ and its importance in software development
C++ was developed as an improvement of C; it was first worked on at AT&T Bell Labs in 1979. Generally speaking, any C program can be considered a legal C++ program, meaning that C is, for most practical purposes, a subset of C++. C++ combines the features of both low-level and high-level languages, and it is typically considered a middle-level language. It is one of the oldest programming languages still in use today and is used extensively in numerous domains such as high-performance servers, entertainment software such as video games, system software, embedded software, device drivers, and application software. C++ is also widely applied in research and has had a significant influence on the development of other programming languages such as Java (Oguntunde, 2012). Among C++'s most notable features are its speed and its provision of different programming styles to support the development of software and systems; when dealing with large projects, C++ can be used in an object-oriented style. Some typical features of the language: C++ supports generic programming, it is case-sensitive, it is statically typed and compiled, and it is a free-form programming language.
Overview of Python and its significance in software development
Although many still consider Python a scripting language, Python is a dynamic programming language that can be used to develop sophisticated programs; for instance, developers apply Python to write programs for some of the world's fastest computers. Python draws on many other programming languages, such as SmallTalk, ABC, C++, Modula-3, the Unix shell, Algol-68, and many other scripting languages. Python was developed between the late 1980s and the early 1990s by Guido van Rossum, and the modern-day version of Python was established in the Netherlands at the National Research Institute for Mathematics and Computer Science. Python differs from Java and C++ in many respects. For instance, Python's syntax does not use semicolons, relying on whitespace instead. Also, while other programming languages require the programmer to declare variables and define their datatype (Van Rossum & Drake, 2011), in Python variables are simply names bound to objects, so there is no need to declare a datatype. Python guides developers into writing readable code and reduces the amount of time required. One of Python's distinguishing features is that the language is easy to read and easy to learn; it is also scalable and portable from one platform to another.
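The points above in miniature, as a small sketch with assumed example values: no semicolons, no type declarations, and indentation marking block structure.

```python
# No semicolons, no declared types, and whitespace marks the blocks.
def classify(value):
    if value > 10:          # the indented block needs no braces
        label = "large"     # `label` is never declared with a datatype
    else:
        label = "small"
    return label

print(classify(42))   # -> large
```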
Problem statement
All programming languages offer various strengths and weaknesses that motivate a programmer to pick them for a software development task. The use of APIs to implement parallelism is one of the most advanced features found in some existing programming languages. The pace of technological advancement makes it challenging for young and novice developers to choose the most effective programming language to use. One of the critical challenges is investing time in learning a new programming language and then applying that knowledge on a software development platform under certain constraints.
Goals
This paper provides a foundation for programming language paradigms by comparing three of the most prominent and extensively used programming languages. It also aims to identify the distinguishing features of C++, Python, and Java and to analyze which of the three programming languages gives the best performance in any given instance.
Research questions
What are the distinguishing features of C++, Java, and Python programming languages?
How does the usage cost of the three programming languages differ?
What are the programming domains of the three languages?
What are the programming paradigms used by the three programming languages?
How do the three languages compare in terms of portability, simplicity, and readability?
Relevance and significance
As stated above, it is challenging for beginners to distinguish between the three prominent programming languages, and it is essential for software developers and computer scientists to be able to do so. The insights provided in this article offer robust preparation for selecting the most appropriate language to learn, reducing the inconvenience of learning an unsuitable language given one's needs and requirements. Overall, this study will help one choose and learn a programming language that fits the demands of his or her software development work.
Barriers and challenges
The exponential growth and development of the three programming languages posed a challenge, since it was difficult to compare the three languages at any given point in time; I had to compare the latest versions of the technologies. Additionally, Python is a technology still under active development, particularly in its application to data mining. As a result, I encountered contradictory perspectives about the language and had to review an extensive number of articles to identify Python's unique features in software development.
Chapter 2 Literature review
Generations of programming languages
In total, there are five generations of programming languages, presented here in chronological order.
Machine language
Machine language is the first generation of programming languages, and it appeared in the early 1950s. As the name suggests, machine language was written directly in binary, that is, ones and zeros, and it was challenging for human beings to understand. As a result, the language was prone to errors that limited its functionality. Another critical disadvantage of machine language is machine dependency. The language was developed to meet each specific processor's demands and requirements, meaning that scientists had to create a different version of the language for every CPU (Ogala & Ojie, 2020).
Symbolic assembly languages
The symbolic assembly languages form the second generation of programming languages, and they simplified the complexity of machine languages by using symbols to represent the ones and zeros. Assembly language operated at a higher abstraction level than machine languages and used combinations of numbers, portions of words, and symbols such as the dollar and percent signs to create instructions. The key challenges limiting symbolic assembly languages were their hardware dependency and lack of portability, meaning that software developed in assembly language could not be moved from one processor to another.
Problem-oriented languages
The third-generation languages were developed between the 1960s and 1980s, and they were the first languages to be referred to as high-level languages. These languages used near-English words to form commands and relied on compilers to convert the code into machine language by matching the English words with their machine equivalents. One of the distinguishing features that differentiated third-generation languages from the prior generations is that each programming language in this generation had a compiler or an interpreter. Additionally, the languages were relatively quick to execute once compiled (Ogala & Ojie, 2020). One of the critical challenges in this generation was that a different compiled version of the code was needed for every different processor.
Non-procedural languages
The distinguishing feature of fourth-generation programming languages is that they are more concerned with the problem being solved than with how the actual coding will be done. Fourth-generation programming languages are user-friendly, are independent of the operating system and processor, can be used by non-programmers, are portable, and have intelligent options that automate various tasks during the software development process. The most notable languages in this generation include SQL and MySQL.
Fifth-generation programming languages
The 5GL programming languages are still under development and rely on modern-day technologies such as artificial intelligence and machine learning. The 5GL programming languages will automate the generation of code and the creation of instructions to solve a problem. These languages will require minimal supervision or interaction with programmers. The languages in this generation will have the capacity to think for themselves and address challenges that would otherwise prove difficult to solve using programming languages from earlier generations.
Time comparison between Java and C++ software
A study by AlHeyasat et al. (2012) provides a detailed comparison of the flexibility of Java and C++ in executing given tasks. The study focused on determining the time needed to run given algorithms and on measuring the execution's swiftness and efficiency. The scholars used the same algorithm to determine which programming language was more efficient than the other; in simpler terms, the verdict on which language is better was based on the time the two took to execute the same algorithm. The study found that Java took an average of 500 microseconds to execute the algorithm, while C++ took an average of fewer than 450 microseconds to run the same algorithm. The study concludes that although Java is a robust language, C++ is more effective in executing programs since it requires less time to compile and run an algorithm compared to Java.
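As a rough sketch of this benchmarking approach (shown in Python for illustration, since the study's actual Java and C++ code is not reproduced here; the sorting workload and repetition count are assumptions):

import random
import timeit

# Assumed workload: sorting a list of random integers.
data = [random.randint(0, 10_000) for _ in range(1_000)]

def run_algorithm():
    sorted(data)

# Average execution time per run, in microseconds, over 1,000 repetitions.
total_seconds = timeit.timeit(run_algorithm, number=1_000)
print(f"average: {total_seconds / 1_000 * 1e6:.1f} microseconds")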
Chapter 3 Methodology
Due to the time limitations of this research, I relied on comparative analysis to compare the three programming languages. The primary goal of this study is to identify the fundamental and advanced features of Python, Java, and C++ in order to determine their distinguishing factors and their suitability for different programming environments. Additionally, I carefully reviewed each language's advantages and disadvantages and the problems each language can solve. This comparative analysis focused on identifying the distinguishing features of the three programming languages using the following criteria:
Readability
Programming paradigm
Programming domain
Portability
Usage cost
Programming environment
Chapter 4 Findings and analysis
Programming domain
Software development has advanced exponentially to affect every aspect of our lives. Among the various applications of software development are business applications, systems programming, and scientific applications. Java and C++ stand out as hybrid programming languages since they are used in almost every area of programming. As a result, these two languages have been extensively applied in software development and have played a significant role in the development of other programming languages. The two languages have data structures that can be applied in a wide range of scientific and business applications (Foster, 2014). One of the key differences between the two languages is that C++ is typically used for large projects, while Java is used for relatively smaller projects. For instance, C++ is used in the development of operating systems and other complex software programs such as the Symbian and Linux operating systems. Java plays a minimal role in the development of such systems, and no operating system has been developed entirely in Java.
Python is also a hybrid language that is typically used as a scripting language for web applications. However, Python also has the capacity to support the development of standalone software programs that can be executed independently. Nevertheless, Python is not as widely applied as Java and C++, and it is yet to be used in the development of projects as large as an operating system.
Programming paradigm
The programming paradigm of a language specifies the design characteristics that must be followed during the development process. In other words, the programming paradigm describes the styles used to write instructions. Java supports the use of various programming paradigms, such as object-oriented, reflective, and structured programming. Typically, Java is chosen for its support of the object-oriented paradigm (McMaster et al., 2017). In an object-oriented paradigm, messages are passed to objects, and the basic unit of the program is the object. Objects have state and can do something within the software. Structured programming means that the programs have nested control structures. Java also supports the imperative programming paradigm, meaning that commands are written as a sequence of instructions. In the imperative paradigm, the commands are written step by step and are also interpreted in the same order.
The following code samples illustrate the differences between the three programming paradigms, using the same task in each case: collect the last names longer than four characters, convert them to upper case, and return them sorted.
Object-oriented programming
result = []
for a in names {
    if a.lastname.length > 4 {
        result.add(a.lastname.toUpper())
    }
}
return result.sort()
Structured programming paradigm
result = []
for i = 0; i < length(names); i++ {
    a = names[i]
    if length(a.lastname) > 4 {
        addToList(result, toUpper(a.lastname))
    }
}
return sort(result)
Imperative programming
result = []
i = 0
start:
    numNames = length(names)
    if i >= numNames goto end
    a = names[i]
    namelength = length(a.lastname)
    if namelength > 4 then addToList(result, toUpper(a.lastname))
    i = i + 1
    goto start
end:
    return sort(result)

Just like Java, C++ and Python are multi-paradigm programming languages that support various programming paradigms. For instance, C++ supports generic, structured, object-oriented, and functional programming. C++ uses procedural calls to support the imperative programming paradigm, and C and C++ have similar programming styles. Similarly, Python supports object-oriented and structured programming styles. Python is a hybrid language, and based on its design characteristics it supports further programming paradigms, such as aspect-oriented and functional programming.
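To illustrate Python's functional leanings, the same task as the pseudocode above might be written in Python as a single declarative expression; the Person class and sample data below are assumptions for demonstration only:

from dataclasses import dataclass

@dataclass
class Person:
    lastname: str

names = [Person("Turing"), Person("Hopper"), Person("Rossum")]

# Filter, transform, and sort in one functional-style expression.
result = sorted(p.lastname.upper() for p in names if len(p.lastname) > 4)
print(result)  # ['HOPPER', 'ROSSUM', 'TURING']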
Readability
The readability of a programming language is determined by the consistency of its rules and the clarity of its keywords. Java is relatively easy to learn and understand since it requires the programmer to know only about 50 keywords. As a result, the readability of Java is effective and impressive, especially since the keywords are uncomplicated and consistent. Java's consistency also extends to its coding rules: the use of operators, coding conventions, importing libraries, and handling exceptions are consistent throughout the language. It is therefore appropriate to conclude that Java is a highly readable programming language.
Unlike Java's, C++'s readability is not very impressive due to numerous inconsistencies in the language's coding rules. C++ also has many rules and roughly 84 keywords, which are relatively difficult to master. Some of the keywords in C++ include goto, enum, break, struct, bitand, static, auto, alignas, static_cast, case, switch, if, for, explicit, false, delete, xor, volatile, using, union, true, float, and bitor, among many others. The rules used to handle exceptions in C++ are also not consistent and require the programmer to memorize when to use which rules and when not to (Foster, 2014). Overall, the readability of C++ is not as impressive as that of Java.
Python was explicitly designed to overcome the readability challenges of other programming languages, Java and C++ included. Python has only 35 keywords, which are relatively easy to understand and remember. Some of these keywords are True, and, as, del, from, continue, while, lambda, is, try, False, return, raise, import, nonlocal, not, or, and break, among others (Van Rossum & Drake, 2011). All the keywords in Python are English words, meaning that it is easy for the programmer to remember them and understand their use.
Furthermore, Python has a reduced set of rules and syntactic exceptions. These characteristics make Python the most readable of the three programming languages.
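The keyword count is easy to verify from Python itself, since the standard library exposes the list of reserved words:

import keyword

# Print the reserved words recognized by the running interpreter.
print(keyword.kwlist)
print(len(keyword.kwlist))  # 35 in recent Python 3 releases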
Simplicity
Simplicity describes how easy it is to learn a language and understand how it works. The simplicity of the three languages can be gauged by developing a simple program such as the classic "Hello World" program. In Java, the program has a three-part structure, with the System.out part printing the message on the screen. For a beginner or a novice programmer, understanding the three parts and their functionality can be challenging, making Java a comparatively complicated language to learn (McMaster et al., 2017).
Similar to Java, developing a "Hello World" program in C++ involves a series of steps that can be difficult to understand, and the boilerplate one must include in C++ complicates the learning process even further. In Python, creating the same program is much simpler since the print command reads like natural language: the programmer only needs to write print("Hello World!"). This makes Python the simplest language for beginners and novice programmers to learn.
Portability
The portability of a programming language defines its ability to work across different processors and operating systems, and it relies on the abstraction between the system interface and the application logic. The Java Virtual Machine (JVM) makes Java a highly portable programming language, meaning that software and systems developed in Java can run on any processor and in any operating system. Java provides abstraction in three distinct ways, namely OS portability, CPU architecture portability, and source code portability (Foster, 2014). Java compilers produce an intermediate form called bytecode (sometimes referred to as J-code) that enables Java programs to run on any CPU architecture. The JVM means that Java programs can run on any system regardless of the underlying compiler, operating system, and CPU.
Since C++ is an integral part of all major operating systems, the language is highly compatible with most existing technologies and systems. However, different versions of the C++ standard, such as C++98, C++03, and C++11, rely on different compiler support and are not fully compatible with each other. As a result, C++ can be said to be less portable than Java. On the contrary, Python has impressive portability and can be integrated with the major operating systems such as Windows, macOS, and Unix.
Programming environment
The programming environment describes the friendliness of the editors used with a programming language. Java relies on two Integrated Development Environments (IDEs), namely Eclipse and NetBeans. C++ can also use the two IDEs used by Java since they support extensibility and plugins. The two IDEs enable the programmer to start new projects and debug the source code during development; they also trace errors during development and allow the programmer to detect them more easily. Additionally, the IDEs provide visual editors that allow developers to build GUIs more comfortably. Visual Studio is the most common option for C++, and most programmers use this tool to develop C++ programs (Satav et al., 2011). NetBeans and Eclipse are open source, and developers do not have to pay a fee to use them; Visual Studio likewise offers a free edition. Python has numerous programming environments and can use any environment stated above. Other programming environments used by Python include PyCharm, which provides features such as a debugger, unit testing, code navigation, and code completion, while other Python tooling supports web development with Web2py, Mako, Flask, and Django.
Usage costs
Some of the costs associated with programming languages include development costs, training costs, marketing costs, program execution costs, and maintenance costs. The usage cost of a programming language is closely tied to how easy the language is to learn and understand. Python is the cheapest programming language to use since it reduces the costs associated with the training, development, and maintenance of Python software programs (Foster, 2014). Since Java is more readable than C++, it is less expensive than C++, making C++ the most expensive of the three languages to use. Usage costs are among the significant aspects that organizations consider when selecting the language in which to develop their systems.
Chapter 5 Conclusion
Many in the software development industry continue to view Java as the most appropriate language moving forward due to its extensive applicability and portability. Java also appeals to many programmers as the standard programming language for software development due to its advanced features. However, Oracle must strive to develop Java and release versions that are compatible with modern-day requirements such as data mining and the use of artificial intelligence and machine learning in software development. C++ continues to appeal to experienced programmers as the most effective programming language for executing large programs and native coding, such as the development of complex video games. On the other hand, Python is attracting young developers who are passionate about fully exploiting modern-day technologies. Python is surpassing close competitors such as Ruby, and it is rapidly emerging as one of the most widely applied programming languages across the globe.
Implication and Recommendation
Anyone within the software engineering industry can use the underlying programming principles identified in this paper to pick the right language. Learning a new programming language is relatively hard, especially if one does not have the skills to identify the best language for him or her to learn. Continued research should be conducted to help identify how the three languages differ in exploiting modern technologies such as the Internet of Things (IoT), artificial intelligence, and machine learning.
References
AlHeyasat, O., Abu-Ein, A. A. K. & Sharadqeh, A. A. (2012). Time comparing Java and C++ software.
Foster, E. (2014). A comparative analysis of the C++, Java, and Python languages.
McMaster, K., Sambasivam, S., Rague, B., & Wolthuis, S. (2017). Java vs. Python coverage of introductory programming concepts: a textbook analysis. Information Systems Education Journal, 15(3), 4.
Ogala, J. O., & Ojie, D. V. (2020). Comparative analysis of C, C++, C# and Java programming languages. GSJ, 8(5).
Oguntunde, B. O. (2012). Comparative analysis of some programming languages. Transnational Journal of Science and Technology, 2(5), 107-118.
Satav, S. K., Satpathy, S. K., & Satao, K. J. (2011). A Comparative Study and Critical Analysis of Various Integrated Development Environments of C, C++, and Java Languages for Optimum Development. Universal Journal of Applied Computer Science and Technology, 1, 9-15.
Van Rossum, G., & Drake, F. L. (2011). The python language reference manual. Network Theory Ltd..
Information Science and Technology
↳ Internet
A Brief Introduction to Low-Level and High-Level Programming Languages
A programming language can simply be described as a set of syntax, semantics, and symbols that aid in solving a real-world problem by use of a computer. Programming languages have evolved since the first one emerged in the early 1950s. Pioneer languages such as machine language and assembly language are referred to as low-level languages, while high-level languages are those with English-like statements that humans can relate to. Examples of high-level languages are Java, Python, and C#.
Low-Level programming Languages
The low-level programming languages are the earliest. They are referred to as low level because they address the internal circuitry of the hardware. These languages are mostly complex to implement, and a programmer may take more time to learn them effectively.
According to Crary and Morrisett (1999), low-level languages do not need a compiler to convert them to machine code, which makes their translation faster compared to that of high-level programming languages. The major disadvantage of low-level programming languages is that they are not portable and can only address one machine. Examples of these languages are machine language and assembly language.
High-Level programming Languages
The high-level programming languages emerged in the early 1960s. They are known to be scalable and portable, meaning they can run on computers other than the one on which they were written. They take more compilation time because they need a compiler to turn the source code into machine code. According to Podkopaev et al. (2019), high-level languages provide integration, parallelization, and real data-processing capabilities. Graunke et al. (2001) note that "the high-level programming languages have been used in a wide range of technological fields such as web application development, socket programming, data science, mobile computing, distributed computing and artificial intelligence."
Conclusion
In conclusion, the high-level programming languages can be used in a variety of fields and can perform more varied tasks than the low-level programming languages. Even though they require a compiler for the translation process, the high-level languages are more powerful than the low-level programming languages. On the other hand, the low-level programming languages address the machine directly and are faster to translate.
References
Crary, K., & Morrisett, G. (1999). Type Structure for Low-Level Programming Languages. In J. Wiedermann, P. van Emde Boas, & M. Nielsen (Eds.), Automata, Languages and Programming (Vol. 1644, pp. 40–54). Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-48523-6_4
Graunke, P., Krishnamurthi, S., Van Der Hoeven, S., & Felleisen, M. (2001). Programming the Web with High-Level Programming Languages. In D. Sands (Ed.), Programming Languages and Systems (Vol. 2028, pp. 122–136). Springer Berlin Heidelberg. https://doi.org/10.1007/3-540-45309-1_9
Podkopaev, A., Lahav, O., & Vafeiadis, V. (2019). Bridging the gap between programming languages and hardware weak memory models. Proceedings of the ACM on Programming Languages, 3(POPL), 1–31. https://doi.org/10.1145/3290382
Information Science and Technology
↳ Modern Technology
Use of artificial neural networks to identify fake profiles
Introduction
Social media platforms are a place where everyone can remain in touch with their friends, share their latest news, and interact with others who share their interests. Online social networks take advantage of front-end technology to enable persistent accounts and help people get to know one another. Facebook and Twitter are evolving alongside people in order to keep them in touch with one another (Liu et al. 2018). Online activities bring individuals with similar interests together, making it easy for users to meet new friends. Gaming and entertainment websites with more followers usually have a larger fan base and higher ratings, and ratings motivate online account users to learn fresh strategies in order to compete more effectively with their peers.
Research Aim
According to studies, between 20% and 40% of profiles on online social networks such as Facebook are fraudulent. Consequently, this study aims at the identification of false accounts on such sites through a neural-network-based approach.
Research Objective
● To understand the application of neural networks to real-life problems
● To know the implementation process of neural networks for fake profile detection
● To detect fake profiles on social media through python programming and neural networking algorithms
Research Question
● What is the major role of data identification?
● How can fake profiles be avoided on a social network platform?
● What types of problems does an artificial neural network face?
● How can an image be represented in an artificial neural network (ANN)?
Research Significance
The implementation is a strategy for classifying an item based on collected data that has been used to build a classification model. Apart from that, the project contributes to the advanced processes that help implement security functions on networking sites. Earlier projects have tried to solve this problem, but the number of fake account profiles has gradually increased; therefore, this project uses Python (Yang et al. 2019), whose programming ecosystem is highly robust. The categories of the validation data are withheld and left for the trained classifier to determine. Throughout this study, researchers employ Artificial Neural Networks to determine whether the profile data provided come from authentic or bogus individuals.
Theories and Models
Support Vector Machine
Some fraudulent profiles are used to spread disinformation and push agendas, so the identification of a fraudulent account is critical. Machine-learning-based technologies have been employed to detect bogus profiles that might mislead users. The information is pre-processed using several Python libraries, and a comparison model is produced in order to find a viable solution appropriate for the available information (Hayes et al. 2019). Machine learning classifiers are used to detect bogus information on social media networks; for the identification of bogus accounts, the classification abilities of methods such as neural networks and support vector machines were employed.
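A minimal sketch of such a support-vector-machine classifier is shown below; the data file, feature columns, and is_fake label are hypothetical placeholders, not the study's actual dataset:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical table of numeric profile features with a binary is_fake label.
df = pd.read_csv("profiles.csv")
X = df[["friends_count", "posts_count", "account_age_days"]]
y = df["is_fake"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# SVMs are sensitive to feature scale, so standardize before fitting.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

model = SVC(kernel="rbf")
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))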
Technology Acceptance Model
TAM is among the most prominent models of technology adoption, and it includes two main elements affecting an individual's inclination to employ new technology: perceived usefulness and perceived ease of use. An elderly individual who views online content as too complicated to use, or as a pointless exercise, will be less inclined to adopt such advanced technologies, whereas an older person who views digital content as providing needed mental stimulation and as being simple to learn will be more likely to want to learn to use it.
Research Approach
A deductive approach can be described as one in which hypotheses are generated from a theory's assertions; to put it another way, the deductive method is concerned with deducing results from assumptions or assertions. The primary purpose of this study is to assess the consequences of fraudulent profile pages on Facebook (Altman et al. 2019). To accomplish this, the researchers devised a thorough data-harvesting approach, conducted a social engineering study, and examined connections between phony profiles and actual users in order to eventually undercut Facebook's economic model. Furthermore, qualitative methods are used to examine privacy concerns.
Research Design
Initially, the researchers constructed six bogus Facebook profiles across three age categories, half of them female and half male. They also developed a profile that depicted an animated cat rather than a real person and revealed no identifying information at all. They then established an additional fake profile depicting a teenage girl who friended every one of the false accounts in the experiment to induce social connections (Zhou et al. 2019). Unlike the accounts used in previous social experiments, which attempted to keep the profiles as basic as possible, the researchers produced realistic and complicated profiles to ensure a high level of social appeal. They did this by drawing on the Seagate Labs online social study.
Data Collection Process
The bulk of businesses rely on data collection methods to forecast future probabilities and trends, and after acquiring data, the data organization step must be completed. A secondary data collection process is used to gather this study's data from various sources, which helps with the implementation of the project. Apart from that, the variables and techniques used in the software implementation are properly defined, along with the operational functions that support the implementation of the ANN.
Software Requirements
It is believed that once the study is completed, the project's comparability will increase dramatically. In this case, we may also leverage contact with the client to clear up any uncertainty and determine which criteria are more critical than others. This procedure often includes multiple graphical representations of the processes, covering the different types of entities and their relationships (Yang et al. 2018). The graphical approach may aid in the discovery of inaccurate, inconsistent, absent, or excessive requirements. Flowcharts, entity-relationship diagrams, database designs, state-transition diagrams, and other models fall under this category.
Analysis
In recent years, social media has come to dominate the globe in a variety of ways, and the number of people utilizing digital platforms is rapidly expanding. The biggest benefit of digital social networking sites is that they allow us to link with people more readily and interact with them more effectively. This has also opened up a new avenue for prospective attacks, including phony identities, misleading data, and so on. The primary goal of this study is to identify fraudulent users; in this research, the "gradient boosting" method is also utilized to identify bogus users effectively. The fact that digital networking sites are saturated with incorrect material and advertisements makes it difficult to detect these bogus profiles.
The primary coding language used here is Python, which is employed to identify fake accounts in the dataset. The approach draws on a variety of techniques and libraries that aid in the detection of fake profiles with high accuracy. Python and widely used Python libraries such as NumPy, Pandas, Matplotlib, SciPy, and scikit-learn (Sklearn) were utilized.
1. Uploading the dataset: A dataset is a collection of examples, and when working with Python or machine learning, several datasets are needed for different purposes.
Training dataset: The dataset that is fed into the algorithm to train the model.
Testing dataset: The dataset that is employed to validate the model's accuracy; because it has not been utilized to train the model, it may also be called the validation dataset.
2. Dataset pre-processing: This is an essential phase in identifying fake profiles. In this phase, the data is processed into an appropriate form that can be input to the detection procedure. The useful features that can be derived from the data directly influence the model's capacity to learn; hence, it is vital to pre-process the data before feeding it to the model. The dataset is widely used to demonstrate supervised learning in Python, which involves training a system to forecast the labels; the labels are disregarded for the sake of demonstrating unsupervised detection methods.
3. The "random forest" is a model composed of several "decision trees". During training and testing the model employing the "Random Forest '' method, every tree derives from a randomized selection of the pieces of data, as well as the sampling selected with replacement are called bootstrap, where certain values are utilized in many instances within a particular tree.
4. To start the explorative study, load the libraries and create methods to chart the data using Matplotlib. Not all charts can be created from the information.
5. The next step is to run the classification algorithms. Based on the performance of the algorithms on the dataset, appropriate outputs regarding the issue are produced: a correlation matrix, scatter plots, distribution graphs, and other material are obtained as output. For this particular dataset, the overall efficiency of identifying fake profiles is greater when utilizing the Random Forest algorithm, trailed by the Neural Network algorithm (see the sketch after this list).
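The steps above can be sketched as one small pipeline; this is an illustrative assumption of how the pieces fit together, and the file name, feature columns, and is_fake label are hypothetical:

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Step 1: load the dataset (hypothetical file with numeric profile features).
df = pd.read_csv("profiles.csv")

# Step 2: simple pre-processing; drop incomplete rows and split features/label.
df = df.dropna()
X = df.drop(columns=["is_fake"])
y = df["is_fake"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 3: Random Forest -- an ensemble of decision trees fit on bootstrap samples.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Step 4: a basic exploratory chart of the features with Matplotlib.
df.hist(figsize=(8, 6))
plt.tight_layout()
plt.savefig("feature_distributions.png")

# Step 5: evaluate; the confusion matrix summarizes correct and incorrect predictions.
pred = model.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))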
Future Work
The main issue has been that an individual can have several online accounts, which allows them to create fraudulent profiles and logins inside online communities and social networking websites (Echizen et al. 2019). The goal is to link ID card information to account creation so that the number of accounts a person can create is restricted, eliminating the possibility of fraudulent profiles at any time. Apart from that, all steps are handled privately, which should help with implementation on the network site.
Conclusion
The literature review covered various kinds of controlled fake profiles across several platforms. The critical evaluation also demonstrates that the Python code supporting the ANN may be adjusted using a variety of approaches. Artificial-intelligence-based technologies are employed in a wide range of businesses, including computational linguistics, online databases, human speech interpretation, and image recognition. According to study guidelines, secondary resources must always be used while doing secondary research; researchers are not required to employ primary approaches unless the dissertation specifically requests that their findings be consistent with previously published research.
Reference List
Journal
• Lau, E.T., Sun, L. and Yang, Q., 2019. Modeling, prediction and classification of student academic performance using artificial neural networks. SN Applied Sciences, 1(9), pp.1-10.
• Shu, K., Mahudeswaran, D., Wang, S., Lee, D. and Liu, H., 2018. FakeNewsNet: A data repository with news content, social context and spatiotemporal information for studying fake news on social media. arXiv preprint arXiv:1809.01286.
• Devlin, M.A. and Hayes, B.P., 2019. Non-intrusive load monitoring and classification of activities of daily living using residential smart meter data. IEEE transactions on consumer electronics, 65(3), pp.339-348.
• Torng, W. and Altman, R.B., 2019. Graph convolutional neural networks for predicting drug- target interactions. Journal of chemical information and modeling, 59(10), pp.4131-4149.
• Singh, J., Hanson, J., Paliwal, K. and Zhou, Y., 2019. RNA secondary structure prediction using an ensemble of two-dimensional deep neural networks and transfer learning. Nature communications, 10(1), pp.1-13.
• Hanson, J., Paliwal, K., Litfin, T., Yang, Y. and Zhou, Y., 2018. Accurate prediction of protein contact maps by coupling residual two-dimensional bidirectional long short-term memory with convolutional neural networks. Bioinformatics, 34(23), pp.4039-4045.
• Nguyen, H.H., Yamagishi, J. and Echizen, I., 2019. Use of a capsule network to detect fake images and videos. arXiv preprint arXiv:1910.12467.
• Oshikawa, R., Qian, J. and Wang, W.Y., 2018. A survey on natural language processing for fake news detection. arXiv preprint arXiv:1811.00770.
• Bondielli, A. and Marcelloni, F., 2019. A survey on fake news and rumour detection techniques. Information Sciences, 497, pp.38-55.
Information Science and Technology
↳ Computers
My Journey Learning PHP in Web Development - Reflection Paper
As a Web Development major, I have learned to appreciate the exhilarating moment when my mind engages a new language. Learning PHP has been an experience, to say the least. There were a few times when I slipped into JavaScript, since that is a language I am comfortable with and, by default, what I typically write. At the beginning of the course, I was worried about what software I should use and what would be acceptable, but thankfully I just had to click on the link that was provided, and it was smooth sailing from there. This course has been very informative, but unfortunately, due to an inconvenient illness, I got pretty behind. I would suggest to any future student to make sure you do not get behind. Playing catch-up can be very difficult, especially when you are a full-time student working a full-time job. It is best to just stay on top of the workload, and if you run into any issues, reach out to the professor and hopefully work something out.
Throughout the duration of this course, there have been many ups and downs. A few ups have been learning the capabilities of PHP, which are tremendous. I have taken Python, JavaScript, C#, and ASP.NET, and I am happy to add PHP to this list. I really enjoy learning a new language and comparing it to the other languages that I have learned. Sadly, we must take the good with the bad: I did not enjoy the day and time of the class meeting and had to watch the recording the day after, or while doing my homework. I know it is hard to set a meeting time that works for everyone.
A few significant learning moments in this course were learning about PHP form processing and how you can use the $_GET, $_POST, or $_REQUEST superglobal arrays to access form data. I found this very interesting. I also enjoyed learning about the form-processing workflow: access the value in $_POST (carefully), check that the submitted value is valid, process the value (MVC), and then redirect or render a template. There was a video on this process in the week-four videos section that went over an example, which was very informative; I understood the process much better because of it.
Nevertheless, I had multiple aha moments. As mentioned before, I was very worried about what software we would be using, and I was dreading the first homework assignment because it is always a hassle to get a grasp of things right at the beginning. I worried so much about the first assignment, and then when I opened the link, I realized the professor had included a starter link to the open-source site that we would be modifying. This was a great moment for me, and also a relief. Another aha moment came while I was completing homework ten. I was having issues with question two: Alice and Bob want to have a private (encrypted) email conversation; explain how Alice can send a secure email to Bob using public/private key cryptography. I am not sure where my head was at, but I initially wrote something completely off the wall, and then I remembered something very similar being discussed in the weekly videos. I located said video, which touched on a similar scenario, and I was relieved that I was able to refer back to it and get full credit for that question.
Lastly, based on my experience and expectations, how would I characterize what I have learned in this course? For starters, I was nervous about this class. I took Python last semester, I am more comfortable coding in JavaScript, and I code in SQL every single day for work, so I was nervous that I would get too confused, since every language is different. I feared I would code one line in JavaScript, the next in SQL, and so on and so forth. I think I did okay, however. On the first few assignments, I got some feedback that I was coding in JavaScript, which I will admit did not surprise me, but once I got that feedback, I made sure that I was coding in PHP. I will say that I enjoyed the database section of the course. It was a break from the programming language, so I just focused on basic SQL. I work with ER diagrams and relational databases every day: I work in the IT Department for Genesis HealthCare as a Reporting Analyst, where English is our first language, SQL is our second, and clinical terminology is our third (I consider this another language because of how extensive it is). To tie this all back to what I have learned in this class, I would say I learned how to differentiate PHP from other languages, and I learned that it is an open-source language, which is always convenient. I created my mother's website using Bootstrap, which was self-taught, and had I known this language beforehand, I probably would have attempted to create her website using PHP frameworks.
In conclusion, I enjoyed this course. I got behind, and I will receive a low grade, but I actually enjoyed learning this language. Its capabilities are impressive, and if my mother ever needs me to modify her website, I might just rebuild it using a PHP framework. I would not mind learning more about the language and configuring something using PHP, and maybe the aha moments do not have to stop because the course ended. Thank you for everything, Professor Whitney.