Information Science and Technology
↳ Modern Technology
Machine Learning
Machine learning is a technique of data analysis used by computer scientists and programming experts to automate analytical model building. It is a specialized area of artificial intelligence based on the idea that computers can identify data patterns, learn from them, and make appropriate decisions with minimal human assistance (Holzinger, 2016). This paper describes and evaluates machine learning processes and techniques used in modern computer systems and automated machines.
Labeled vs. unlabeled data sets
Labeled data comes with descriptive tags, while unlabeled data does not. Supervised learning uses labeled data because its meaningful tags can be used in modeling. Unlabeled datasets, by contrast, contain only natural or human-created artifacts, which unsupervised learning can exploit.
Supervised Machine Learning
Supervised learning is common in various applications in modern-day computing, including text processing, image recognition, recommendation systems, and several others. This kind of machine learning is characterized by the use of labeled datasets to train the algorithms, which subsequently classify data or make accurate predictions of outcomes. The input data is fed into the model, which adjusts its weights during training until the model is appropriately fitted. A vast majority of organizations use supervised machine learning to address real-world problems at scale. For instance, the ability of an email application to classify a message as spam and put it in a folder other than the inbox is one capability of supervised machine learning (Holzinger, 2016). Generally, this learning approach can be used in business organizations to eliminate manual classification work and to predict future events using labeled data. Human expertise and intervention are needed to avoid overfitting the data models when building the machine learning algorithms.
Supervised machine learning uses neural network tools that train the model by mimicking the interconnectivity of the human brain through layers of nodes. Every node has inputs, weights, a threshold, and an output. According to Sidey-Gibbons et al. (2019), supervised machine learning uses a training set to teach models that eventually yield the desired system functionality. The training dataset comes with inputs and correct outputs, from which the model learns over a specified training period.
Supervised learning algorithms also measure their accuracy using a loss function, adjusting the model until the error is sufficiently minimized.
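To make these ideas concrete, here is a minimal, purely illustrative Python sketch (not drawn from the cited sources): it computes a single node's output from inputs, weights, and a threshold, and measures prediction error with a simple squared-loss function. All numbers are invented.

```python
# Illustrative sketch of a single neural-network node and a loss function.
def node_output(inputs, weights, threshold):
    """Weighted sum of inputs; the node fires (outputs 1) above the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def squared_loss(predicted, actual):
    """Loss function: squared difference between prediction and correct output."""
    return (predicted - actual) ** 2

prediction = node_output(inputs=[0.5, 0.8], weights=[0.9, 0.3], threshold=0.6)
print(squared_loss(prediction, actual=1))  # training adjusts weights to shrink this
```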
Supervised learning can be either classification or regression, depending on the nature of the data mining. Classification uses an algorithm to assign test data into specific categories (TensorFlow, 2019). This process identifies entities in a dataset and draws conclusions on how the model can label or define those entities. Common classification methods include decision trees, random forests, and support vector machines. Regression, on the other hand, defines the relationship between dependent and independent variables. The technique is used to estimate projections such as sales revenues and commissions for a business. Popular regression models include polynomial, linear, and logistic regression.
Supervised machine learning commonly uses the scikit-learn tool, which is developed in the Python programming language (TensorFlow, 2019). The tool is very useful in data mining and analysis. It provides models and algorithms for classification and regression, the two categories of supervised learning. The tool is easy to understand and learn, and most of its parameters are flexible to change for any algorithm when calling objects.
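As a brief illustration of scikit-learn's interface for the two categories of supervised learning described above, the following sketch trains a classifier and a regressor on scikit-learn's small built-in datasets. It is a generic example under the assumption that scikit-learn is installed, not a reproduction of any model discussed in the sources.

```python
# A minimal scikit-learn sketch: one classification model and one regression model.
from sklearn.datasets import load_diabetes, load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Classification: assign test data to labeled categories.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("classification accuracy:", clf.score(X_test, y_test))

# Regression: model the relationship between dependent and independent variables.
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
reg = LinearRegression().fit(X_train, y_train)
print("regression R^2:", reg.score(X_test, y_test))
```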
The rationale for Selecting an Analytic Tool
The choice of an analytic tool for machine learning depends on several factors that serve as the rationale for the ultimate decision among analytical tools. One of them is the business objectives in relation to the cost of acquiring the tool. The user interface and visualization are often another crucial consideration. Scikit-learn, as one of these tools, is often preferred because of its ease of use and intuitive interface. It also has advanced analytics that allow it to recognize data patterns and predict future trends and outcomes. It is also flexible in that it allows standalone solutions and the integration of other technological capabilities.
Machine Learning Process
The machine learning process is all about using tags and training the machine to learn those tags.
For instance, in training image recognition, the expert would need to tag photos of natural features such as lakes, rivers, forests, and mountains with appropriate names; this exercise is data labeling. When working with machine learning text analysis, the user would feed the text analysis model with text training data and then tag it, depending on the nature of the analysis being done (TensorFlow, 2019). For sentiment analysis, for instance, customer feedback would be fed into the model, which is then trained by tagging each comment as neutral, positive, or negative.
Generally, the machine learning process involves three steps. The first step involves feeding the model training input; in this case, it could be customer reviews and feedback from customer service data and social media. The second step is tagging the training data with the desired output; in the case of the business-customer relationship, the sentiment analysis model would be told whether each customer review is positive, negative, or neutral. The model then transforms the training data into text vectors representing data features (Holzinger, 2016). The third step involves testing the model by feeding it testing data. The algorithms are thus trained to associate feature vectors with tags using the manually tagged samples, and to make predictions when handling and processing unseen data.
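A minimal sketch of these three steps, assuming scikit-learn is available; the reviews, tags, and test sentence below are invented for illustration.

```python
# Sketch of the three-step process: feed training text, tag it, then test the model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Step 1: training input (e.g., customer reviews).
reviews = ["great service", "terrible support", "okay experience", "very helpful staff"]
# Step 2: tag each review with the desired output.
tags = ["positive", "negative", "neutral", "positive"]

vectorizer = CountVectorizer()        # transforms text into feature vectors
X = vectorizer.fit_transform(reviews)
model = MultinomialNB().fit(X, tags)  # associates feature vectors with tags

# Step 3: test the model on unseen data.
print(model.predict(vectorizer.transform(["support was great"])))
```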
Uses of Machine Learning in Healthcare
Machine learning has several applications in healthcare and has been useful in meeting growing medical demands and improving operations at lower costs. For instance, at the bedside, machine learning innovation can assist healthcare practitioners in detecting and treating diseases more efficiently, with more precision, and with more personalized care (Dai et al., 2015). Generally, this innovation has revealed how technology can yield holistic care strategies to improve the quality of care and subsequent patient outcomes.
One complex form of machine learning that mimics the functioning of the human brain, deep learning, is currently being used in radiology and medical imaging. Deep learning uses neural networks to detect, recognize, and analyze cancerous lesions in images (Dai et al., 2015).
Machine learning in health informatics is also streamlining record-keeping through electronic health records. The use of artificial intelligence in EHR improves patient care, lowers healthcare and administrative costs, and optimizes healthcare operations.
Disease identification and diagnosis and medical imaging diagnosis are other areas of machine learning applications in healthcare practice. The machine learning algorithms can detect patterns associated with health conditions and diseases using information from thousands of healthcare records and existing patient data (Sidey-Gibbons et al., 2019).
Conclusion
Machine learning can be either supervised or unsupervised; in most industry applications, supervised learning is preferred because it uses labeled data to make ideal predictions of future events and outcomes. Additionally, supervised learning can be classification or regression, whose difference lies in the nature of the output. Machine learning has been applied in many areas to improve the efficiency and speed of operations while lowering errors and costs. In healthcare, machine learning has been used in medical and imaging diagnoses, treatment interventions, and healthcare records and data management.
References
Dai, W., Brisimi, T. S., Adams, W. G., Mela, T., Saligrama, V., & Paschalidis, I. C. (2015). Prediction of hospitalization due to heart diseases by supervised learning methods. International Journal of Medical Informatics, 84(3), 189-197.
Holzinger. (2016). Holzinger Group Welcome to Students. Youtube.com. Retrieved 5 May 2021, from https://www.youtube.com/watch?v=lc2hvuh0FwQ&feature=youtu.be.
Sidey-Gibbons, J. A., & Sidey-Gibbons, C. J. (2019). Machine learning in medicine: A practical introduction. BMC Medical Research Methodology, 19(1), 1-18.
TensorFlow. (2019). Machine Learning Zero to Hero. Youtube.com. Retrieved 5 May 2021, from https://www.youtube.com/watch?v=VwVg9jCtqaU.
Information Science and Technology
↳ Computers
Knowledge and Skills Paper
Section 1
Chapter 7 of the assigned reading offers a detailed description of text mining and one of its more common uses, sentiment analysis, both of which are relevant to market analytics and decision support tools. Sentiment analysis is a derivative of text mining, and text mining is itself essentially a derivative of data mining. As textual data within organized databases grows in size and quantity, it is vital to recognize the methods used to obtain meaningful information from such vast amounts of unstructured data. Chapter 7 addresses text mining and gives readers an appreciation of the need for it. Text mining is the extraction of knowledge mostly from non-structured (mainly text-based) data sources (Avram, Gligor & Avram, 2020). Text mining is among the fastest-rising divisions of the business intelligence (BI) industry, given that a significant amount of information exists in text form. In particular, this chapter draws clear distinctions between text mining, text analytics, and data mining, and presents the main applications of text mining.
Text mining technologies span nearly every field of industry and government, including advertising, banking, healthcare, pharmaceuticals, and homeland security. The method for carrying out a text-mining plan is also clarified in this chapter. Text mining uses natural language processing to structure collections of text, and then applies data mining techniques such as classification, association, clustering, and sequence discovery to extract knowledge from them. A standardized approach close to the CRISP-DM approach of data mining supports the efficient implementation of text mining. Chapter 7 also introduces and describes the concept of sentiment analysis (Avram, Gligor & Avram, 2020). As an area of research, sentiment analysis is strongly connected to natural language processing, computational linguistics, and text mining. It can be used to optimize the search results created by search engines. Sentiment analysis aims to answer the question "What do people feel about a certain topic?" by using a number of automated methods to dig through the opinions of many (Avram, Gligor & Avram, 2020). The chapter raises awareness of popular sentiment techniques and allows readers to understand the common methods of sentiment analysis.
Chapter 8 is about Web mining and its fields of use. One of the fastest-developing business intelligence (BI) and business analytics (BA) innovations is Web mining. This chapter discusses search engines, Web analytics, social analytics, and the supporting tools, algorithms, and developments under the umbrella of Web mining. It presents and defines Web mining, describes its taxonomy and its fields of use, distinguishes Web structure mining from Web content mining, and describes the internal parts of an Internet search engine. Web mining can be described as the exploration and analysis of the Web and web-based resources to discover useful and interesting information.
Chapter 8 also explains the theory of a Web analytics maturity model and its use cases, and offers details on search engine optimization (SEO). SEO is the deliberate practice of influencing the visibility of an e-commerce site or website in a search engine's organic search results. A maturity framework is a formal representation of the essential dimensions of a business practice and its levels of competency. Social networks, social analytics, and their practical uses are also explained in this chapter, which allows readers to consider and apply social network analytics for deeper engagement with consumers. Social analytics is the tracking, review, evaluation, and interpretation of digital relationships and interactions among individuals, subjects, thoughts, and content. Social media analytics refers to the comprehensive and scientific methods of accessing the large amount of content generated through web-based social media outlets, instruments, and strategies to boost the efficiency of an organization.
Chapter 9 of the assigned reading, titled "Model-Based Decision Making: Optimization and Multi-Criteria Systems," identifies selected approaches used in prescriptive analytics. The aim of this chapter is not to master every aspect of modeling and analysis. Instead, the material is geared toward gaining familiarity with the significant principles as they apply to DSS and their application in decision-making. This chapter presents the core principles of modeling analytical actions and explains how prescriptive models interact with data and with the user. Within DSS, models contribute significantly when they are used to represent actual decision-making scenarios. Several kinds of models exist: models may be static (for example, a single-scenario snapshot) or dynamic (e.g., multiperiod). Analysis may be performed under assumed certainty (the most desirable), risk, or uncertainty (the least desirable).
Furthermore, Chapter 9 explains how spreadsheets can be used for modeling and solution analytics. Spreadsheets offer several capabilities, such as what-if analysis, goal seeking, programming, optimization, database management, and simulation. Decision tables and decision trees can model and solve basic decision-making problems. This chapter explains how a linear programming (LP) model is structured and how different objectives can be handled, and describes what is meant by sensitivity analysis, what-if evaluation, and goal seeking. LP is the most popular mathematical programming tool. Within operational constraints, it seeks an effective distribution of available capital. The objective function, the decision variables, and the constraints are the key components of an LP model. The key problems in multi-criteria decision-making are also illustrated in this portion: decision-making challenges involving several parameters are difficult, but not impossible, to overcome.
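As a small illustration of the LP components named above (objective function, decision variables, constraints), the following sketch solves an invented two-variable allocation problem with SciPy's linprog solver; the coefficients are made up, and SciPy is assumed to be installed.

```python
# A toy linear program: maximize profit 3x + 2y subject to resource constraints.
# linprog minimizes by default, so the objective is negated to maximize.
from scipy.optimize import linprog

c = [-3, -2]                  # objective coefficients (negated to maximize)
A = [[1, 1], [2, 1]]          # constraints: x + y <= 4 and 2x + y <= 5
b = [4, 5]
result = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(result.x, -result.fun)  # optimal decision variables and maximum profit
```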
Chapter 10 of the assigned reading proceeds to discuss specific topics relevant to the model base, one of the core elements of decision support systems (DSS). This chapter discusses the fundamental principles of simulation and heuristics and where to use them. It helps us recognize how search techniques are used in decision support and explains the principles behind genetic algorithms and their implementations. Simulation is a widely used DSS technique that involves experimenting with a model representing the actual decision-making circumstance. Simulation can manage conditions more complicated than optimization can, but it does not guarantee an optimal solution. Many different ways of simulating exist; those essential to DSS include discrete event simulation, system dynamics modeling, Monte Carlo simulation, and agent-based simulation. VIS/VIM helps decision-makers engage directly with a model and displays results in a way that is readily understood. The distinctions between algorithms, blind search, and heuristics are described in this chapter, and the principles and implementations of the various simulation styles are explained. In particular, this segment summarizes what is meant by Monte Carlo, agent-based modeling, system dynamics, and discrete event simulation, and describes the main model management problems.
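To make the idea of Monte Carlo simulation concrete, the sketch below estimates an uncertain outcome by repeated random sampling. The demand distribution, capacity, and profit figures are invented for illustration.

```python
# Monte Carlo sketch: estimate expected profit when daily demand is uncertain.
import random

def simulate_day():
    demand = random.gauss(100, 20)         # uncertain input: mean 100, sd 20
    units_sold = min(max(demand, 0), 120)  # capacity limit of 120 units
    return units_sold * 5 - 300            # invented unit revenue and fixed cost

trials = [simulate_day() for _ in range(10_000)]
print("estimated expected profit:", sum(trials) / len(trials))
```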
Section 2
Text Mining Tools and Vendors
Text mining (also recognized as text analysis) is an automated method of translating unstructured text into simple, meaningful information. It can be used to extract entities and sort texts by topic, sentiment, purpose, urgency, and much more. Equipped with natural language processing (NLP), text mining techniques are used to evaluate all forms of text, from survey comments and communications to tweets and product feedback, helping companies obtain insight and make data-based decisions. The following are six top providers of text analytics to be aware of:
Microsoft Text Analytics API
Clarabridge
RapidMiner
Lexalytics
MonkeyLearn
IBM Watson
To analyze each supplier's offering, one would examine its visualization platform and key natural language processing (NLP) engine, look at feedback from customers, and compare that feedback with organizational communications. As seen in Figure 1, text mining tools can be divided into three groups.
Figure 1: Types of Text Mining Tools
Proprietary Text Mining Tools: These are company-owned, proprietary text-mining tools that must be purchased before use. Demo or trial versions are usable free of cost but have restricted features.
Open-Source Text Mining Tools: These tools, along with their source code, are accessible at no cost and can also be extended by the community.
Online Text Mining Tools: These tools run within websites themselves; only a web browser is needed. Such tools are typically basic and have minimal functionality.
As more companies come to understand the importance of text mining, the number of applications provided by technology companies and non-profit organizations keeps growing. The following are among the common tools for text mining, identified here as proprietary software applications and free software applications.
Commercial Software Tools
Many of the most common software tools used in text mining are listed below. Note that several businesses provide demonstration versions of their products on their websites.
ClearForest offers text analysis and visualization tools.
IBM provides the SPSS Modeler suite of data and text analytics toolkits.
Megaputer Text Analyst offers semantic analysis of free-form text, including summarization, clustering, navigation, and natural language retrieval with search dynamic refocusing.
SAS Text Miner offers a rich range of text-mining and analysis resources.
KXEN Text Coder (KTC) provides a text analytics solution for automatically preparing and transforming unstructured text attributes into a structured representation for use in the KXEN Analytic Framework.
The Statistica Text Mining engine offers easy-to-use text mining features with excellent visualization capabilities.
VantagePoint delivers a range of interactive graphical views and investigative methods with effective capabilities for discovering knowledge in text archives.
The Provalis Research WordStat analysis module evaluates textual content such as answers to open-ended questions, interviews, and so on.
The Clarabridge text mining platform delivers end-to-end solutions for customer experience practitioners who wish to convert customer input into marketing, operational, and quality improvements.
Free Software Applications
Free software tools, many of which are open source, are available from a range of non-profit organizations:
RapidMiner, one of the most popular free, open-source data-mining and text-mining software applications, features a graphically appealing drag-and-drop user interface.
Open Calais is an open-source toolkit for adding semantic features to blogs, content management systems, websites, or applications.
GATE is a leading open-source text mining toolkit, with a free open-source framework and a graphical development environment.
LingPipe is a suite of Java libraries for the linguistic analysis of human language.
S-EM (Spy-EM) is a text classification system that learns from positive and unlabeled examples.
Vivisimo/Clusty is a web search and text-clustering engine.
Web Mining Tools and Vendors
Web mining is the application of data mining techniques to discover or uncover trends in massive web data sets. There are three fields of web mining: web usage mining, web content mining, and web structure mining. Some of the useful tools for web mining are described below.
R: R is a free language and environment for statistical computing and graphics. Interfaces from scripting languages such as Ruby, Python, and Perl have made it widely accessible.
Octoparse: An easy yet powerful web data mining tool that simplifies the processing of web data.
ProWebScraper: An effective mining and web scraping application for web content.
Weka: A collection of algorithms that can be used for different data mining tasks. It offers tools for data classification, preparation, clustering, regression, visualization, and much more.
Majestic: A highly efficient web structure mining tool used in business analytics. It offers web-based link investigation, search engine optimization techniques, and much more.
SimilarWeb: A web mining and market intelligence platform. It empowers companies to make better decisions by using its online mining capability.
Tableau: Tableau provides a family of BI-focused digital data visualization products.
Scrapy: An open-source platform for extracting website data. It is written in Python, and rules for extracting site data can be written in it.
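As a minimal illustration of the link extraction that underlies web structure mining, the sketch below collects hyperlinks from a page using only Python's standard library; the URL is a placeholder, and network access is assumed.

```python
# Extract hyperlinks from an HTML page using only the standard library.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # anchor tags carry the page's outbound link structure
            self.links += [value for name, value in attrs if name == "href"]

html = urlopen("https://example.com").read().decode("utf-8", errors="ignore")
parser = LinkCollector()
parser.feed(html)
print(parser.links)  # the outbound links found on the page
```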
The web and its use continue to grow, and so does the ability to examine web data and derive useful knowledge from it. Web mining strategies can assist a web-enabled digital business in optimizing its marketing, customer service, and sales activities. The implementation of web mining is tied to the rapid development of the World Wide Web (WWW), and web mining is a very popular subject within the field of web science. For e-commerce and e-service websites, web mining often plays a key role in understanding how their websites and services are used and in offering improved service to consumers and users. E-learning, e-government, digital libraries, mobile commerce, surveillance and crime investigation, and electronic enterprise are only a few of its applications.
For decades, the United States has used models to explore the capacities of its military forces and prepare soldiers to conduct their missions efficiently. This equipment is now moving out of training programs and into operations in the fight against terrorism to assist the government and its allies in countering this latest kind of war and strengthening homeland security. Although today's databases provide a wealth of detail about multiple facets of an adversary's activity, compiling this information in a multidimensional format can help decision-makers face the demands of today and foresee unknown yet looming threats. Coping with this new challenge will require the application of all forms of intelligence and military assets, including computer systems for data management, sifting, and correlation. This may involve the introduction of interactive process models, simulations of military interaction, and computational war games that handle uncertainty and discover interactions and context within data that is scattered across several contexts and distributed along a continuum consistent with causation.
Models and simulations, when used as analytical and decision support resources, are like databases that adjust automatically in reaction to relationships between new and old information. Whereas a database is a way to organize, preserve, and search records from one moment to the next, a simulation is a versatile method for rearranging, mixing, modifying, and testing new data combinations. This makes it a theoretically invaluable instrument for anticipating possible terrorist acts based on historical and current incidents and circumstances. The technical simulation community has not yet focused its energy or creativity heavily on a full analysis of the risk of terrorism. As a consequence, models that capture all components of terrorism and search for warning signals of potential acts are required (Palanisamy & Liu, 2018). Combat templates exist that allow Special Forces to rehearse attacks on terrorist strongholds, and intelligence models are available that describe the physical and logical interactions between participants in a terrorist cell. What is still lacking, though, is a digital paradigm that combines the threat's military, fiscal, political, protection, and legal aspects.
There is a range of genetic algorithm software resources, such as MATLAB toolboxes, GPdotNET, and JGAP, or one can write one's own code. Other resources for genetic algorithms include the following:
JCLEC: Evolutionary Computation for Software Framework.
IlliGAL software: Illinois Genetic Algorithms Laboratory.
ECJ 16: An Evolutionary Computation Research Framework based on Java
Genetic algorithms are a category of machine learning used to describe and solve complex problems. They offer a collection of effective, domain-independent search heuristics for a wide range of applications, such as the following:
Dynamic process control.
Induction and optimization of rules.
Identification of new connectivity topologies (e.g., neural network design and neural computing connections).
Simulation of biological models of behavior and evolution.
Complex design of engineering structures.
Pattern identification.
Scheduling.
Routing and transportation.
Layout and circuit design.
Telecommunication.
Graph-based problems.
A genetic algorithm learns about its universe by interpreting information that helps it reject inferior solutions and collect strong ones. Genetic algorithms are also well suited to parallelization. Since the kernels of evolutionary algorithms are fairly simple, it is not hard to write program code to execute them, though software packages are needed for improved performance (Palanisamy & Liu, 2018). Online demonstrations are offered in a range of commercial bundles; representative commercial packages include Microsoft Solver and XpertRule GenAsys, an expert system shell with an integrated genetic algorithm that handles complicated optimization issues in finance, scheduling, development, and so on.
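Because, as noted above, the kernel of a genetic algorithm is fairly simple, a complete toy implementation fits in a few lines. The sketch below evolves bit strings toward an invented fitness objective (the count of 1-bits); the population size, mutation scheme, and all parameters are illustrative choices, not any package's defaults.

```python
# A toy genetic algorithm: evolve bit strings toward all ones.
import random

def fitness(individual):
    return sum(individual)  # invented objective: number of 1-bits

def evolve(pop_size=20, length=16, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # reject inferior solutions
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)  # single-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(length)       # mutation: flip one random bit
            child[i] = 1 - child[i]
            children.append(child)
        population = children
    return max(population, key=fitness)

print(fitness(evolve()))  # best fitness found after 50 generations
```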
Section 3
This paper offered me many insights into text mining, web mining, models, social analytics, optimization principles, simulation, sentiment analysis, and heuristics. Text mining (also recognized as text data mining or knowledge discovery in textual databases) is the semi-automated process of extracting patterns (useful information and knowledge) from large numbers of unstructured data sources. Data mining is the process by which valid, novel, potentially valuable, and essentially understandable patterns are found in data contained in structured databases, where the data are organized by categorical, ordinal, or continuous variables (Palanisamy & Liu, 2018). Text mining is the same as data mining in that it has the same purpose and uses the same methods, except that in text mining the input to the process is a collection of unstructured (or less structured) data files such as PDF files, Word documents, text extracts, and XML files. Sentiment analysis is a technique that uses vast quantities of textual data sources to identify favorable and unfavorable views regarding specific products and services. Web mining (or web data mining), meanwhile, is the process of discovering inherent relationships (i.e., important and valuable information) from web data expressed in the form of textual details, connections, or usage information.
Social analytics involves mining the textual data derived from social networking (e.g., sentiment analysis, natural language processing) and analyzing publicly developed networks (e.g., influencer recognition, profiling, prediction) to obtain insight into current and new clients' present and future habits, and into the likes and dislikes surrounding a company's goods and interests. In many DSS, modeling is a central aspect, and it is a prerequisite in a model-based DSS. There are several model groups, and many specific methods for solving each one. Simulation is a popular modeling technique, and there are many others. Adapting models to real-world scenarios can save millions of dollars or produce thousands of dollars in sales. Heuristics are the informal, judgmental knowledge that constitutes the rules of good decision-making within a field of operation (Palanisamy & Liu, 2018). They guide the problem-solving process via domain knowledge. The method of using heuristics in problem-solving is heuristic programming. Genetic algorithms (GA) belong to the global search techniques used alongside conventional optimization strategies to find possible answers to optimization-type problems that are too difficult to solve directly. Simulation is the appearance of reality: in MSS, simulation is a methodology for performing experiments (for example, what-if analyses) on a computer model of a management system.
References
Avram, C., Gligor, A., & Avram, L. (2020). A Formal Model Based Automated Decision Making. Procedia Manufacturing, 46, 573–579. https://doi.org/10.1016/j.promfg.2020.03.083.
Palanisamy, R., & Liu, Y. (2018). User Search Satisfaction in Search Engine Optimization: An Empirical Analysis. Journal of Services Research, 18(2), 83–120.
On Thermodynamic Technologies: A Short Paper on Heat Engines, Refrigerators, and Heat Pumps
Thermodynamic processes that occur spontaneously are all irreversible; that is, they proceed naturally in one direction but never in reverse. A wheel rolling across a rough road converts mechanical energy into heat through friction. This process is irreversible: a wheel at rest never spontaneously starts moving while growing colder as it goes.
In this paper, the second law will be introduced by considering several thermodynamic devices: (1) heat engines, which are partly successful in converting heat into mechanical work, and (2) refrigerators and heat pumps, which are partly successful in transferring heat from cooler to hotter regions.
Heat Engines
The essence of our technological society is the ability to utilize energy resources other than muscle power. These energy resources come in many forms (e.g., solar, geothermal, wind, and hydroelectric), but even though a number of them are available in the environment, most of the energy used for machinery comes from burning fossil fuels. This process yields heat, which can be used directly for heating buildings in frigid climates, for cooking and pasteurization, and for chemical processing. But to operate motors and machines, we need to transform heat into mechanical energy.
Any device that converts heat partly into mechanical energy or work is called a heat engine. Heat engines absorb heat from a source at a relatively high temperature, i.e., a hot reservoir (such as the combustion of fuel), perform mechanical work, and discard some heat at a lower temperature (Young & Freedman, 2019). In accordance with the first law of thermodynamics, the initial and final internal energies of such a system are equal when it is carried through a cyclic process.
Fig. 1 Schematic energy-flow diagram for a heat engine
Thus, we can say that net heat flowing into the engine in a cyclic process is equal to the net work done by the engine (Brown et al., 2017).
We can illustrate how energy is transformed in a heat engine using the energy-flow diagram (Fig. 1). The engine itself is represented by the circle. The amount of heat QH supplied to the engine by the hot reservoir is directly proportional to the width of the incoming “pipeline” at the top of the diagram. The width of the outgoing pipeline at the bottom is proportional to the magnitude |QC| of the heat discarded in the exhaust. The branch arrow to the right represents the portion of the heat supplied that the engine converts to mechanical work, W.
When an engine repeats the same cycle over and over, QH and QC represent the quantities of heat absorbed and rejected by the engine during one cycle; QH is positive, and QC is negative. The net heat Q absorbed per cycle is
$Q = Q_H + Q_C = |Q_H| - |Q_C|$ (Eq. 1.1)
The useful output of the engine is the net work W done by the working substance. From the first law,
$W = Q = Q_H + Q_C = |Q_H| - |Q_C|$ (Eq. 1.2)
Ideally, we would like to convert all the heat QH into work; in that case we would have QH = W and QC = 0. Experience shows that this is impossible; there is always some heat wasted, and QC is never zero. We define the thermal efficiency of an engine, denoted by e, as the quotient

$e = \dfrac{W}{Q_H}$ (Eq. 1.3)
The thermal efficiency e represents the fraction of QH that is converted to work. To put it another way, e is what you get divided by what you pay for. This is always less than unity, an all-too-familiar experience! In terms of the flow diagram of Fig. 1, the most efficient engine is one for which the branch pipeline representing the work output is as wide as possible and the exhaust pipeline representing the heat thrown away is as narrow as possible.
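As an illustrative worked example with invented figures: suppose an engine absorbs $Q_H = 10{,}000\,\text{J}$ per cycle and discards $|Q_C| = 6{,}000\,\text{J}$. Then

$$W = |Q_H| - |Q_C| = 10{,}000\,\text{J} - 6{,}000\,\text{J} = 4{,}000\,\text{J}, \qquad e = \frac{W}{Q_H} = \frac{4{,}000\,\text{J}}{10{,}000\,\text{J}} = 0.40,$$

so the engine converts 40% of the heat it absorbs into work.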
When we substitute the two expressions for W given by Eq. 1.2 into Eq. 1.3, we get the following equivalent expressions for e:

$e = \dfrac{W}{Q_H} = 1 + \dfrac{Q_C}{Q_H} = 1 - \left|\dfrac{Q_C}{Q_H}\right|$ (Eq. 1.4)
Fig. 2.1 Schematic energy-flow diagram for a refrigerator
Refrigerator and Heat Pump
We can understand the mechanism of a refrigerator by contrast with a heat engine. As explained in the first part, a heat engine takes heat from a hot reservoir and gives it off to a colder place. A refrigerator operates in reverse, i.e., it takes heat from a cold place (the inside of the refrigerator) and gives off that heat to a warmer place, often the surrounding air in the room where the refrigerator is located. In addition, while a heat engine has a net output of mechanical work, the refrigerator requires a net input of mechanical work (Poredoš, 2021).
Fig 2.1 shows an energy-flow diagram for a refrigerator. From the first law of thermodynamics for a cyclic process,
$Q_H + Q_C - W = 0$, so $-Q_H = Q_C - W$, or, because both $Q_H$ and $W$ are negative,

$|Q_H| = Q_C + |W|$
This shows that the heat |QH| given off by the working substance to the hot reservoir is always greater than the heat QC taken from the cold reservoir.
From an economic point of view, the most efficient refrigeration cycle is one that takes off the greatest amount of heat |QC| from inside the refrigerator for the least use of mechanical work, |W|. The relevant ratio is |QC|/|W|, called the coefficient of performance, K, which implies that the larger this ratio is, the better the refrigerator.
A variation on this is the heat pump, which functions like a refrigerator turned inside out. A heat pump is used to heat buildings by cooling the air outside: the evaporator coil is placed outside, where it takes heat from the cold air, while the condenser coils are inside, giving off heat to the warmer indoor air. In this design, the heat |QH| delivered inside the building can be considerably greater than the work |W| needed to get it there.
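An illustrative example with invented figures: if a heat pump draws $|Q_C| = 2{,}000\,\text{J}$ from the outside air using $|W| = 1{,}000\,\text{J}$ of work input, then by $|Q_H| = Q_C + |W|$ it delivers

$$|Q_H| = 2{,}000\,\text{J} + 1{,}000\,\text{J} = 3{,}000\,\text{J}$$

into the building, three times the work supplied.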
Conclusion
The bottom line is that it is impossible to build a heat engine that converts heat completely to work, i.e., one with 100% thermal efficiency. This corresponds to the second law of thermodynamics, which states that it is impossible for any system to undergo a process in which it absorbs heat from a reservoir at a single temperature and converts the heat completely into mechanical work, with the system ending in the same state in which it began. Heat flows spontaneously from hotter to colder objects, never the reverse. A refrigerator does take heat from a colder object to a hotter one, but its operation requires an input of mechanical energy or work. We can deduce that it is impossible for any process to have as its sole result the transfer of heat from a cooler to a hotter object.
References
Brown, T. L., LeMay, Jr., H. E., Bursten, B. E., Murphy, C. J., Woodward, P. M. (2017, January 1). Chemistry: The Central Science (14th ed.). Pearson.
Ozerov, R. P., & Vorobyev, A. A. (2007). Molecular physics. In Physics for chemists (pp. 169–250). Elsevier. https://doi.org/10.1016/B978-044452830-8/50005-2
Poredoš, A. (2021, April 25). Thermodynamics of Heat Pump and Refrigeration Cycles. Entropy, 23(5), 524. https://doi.org/10.3390/e23050524
Young, H. D., & Freedman, R. A. (2019). University Physics with Modern Physics (15th ed.). Pearson.
Information Science and Technology
↳ Modern Technology
Information System Paper
System Definition
A Human Resource Information System (HRIS) is a system meant to manage employees' information and facilitate the management of the human resource department's operations.
The system was developed and deployed in 2016 with the aim of diversifying the activities of the human resource department. The system performs four significant roles. First, it enables the organization in its administrative role owing to the rising number of employees as well as the number of human resource roles. The system also helps the management with the recruitment as well as the retention process. It enables the organization to carry out job analysis to determine the departmental requirements in terms of the qualifications and expertise of the employees. Finally, the system is meant to improve the relationship among the employees creating a positive working environment in the organization.
System Analysis
The human resource information system is made up of six modules or components, each performing a specialized role. First, it has a database meant to store all the information contained in the system. A database refers to a collection of data with a high level of consistency and minimal redundancy[Ram16]. The database of the human resource information system stores employee information and is accessible from different places anytime the users need it. Types of personal data contained in this database include employee performance reports, emergency information, and compensation history. This database is highly secured to prevent manipulation and alteration of the data, which could compromise its integrity, and a database administrator is responsible for running the various queries against it.
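As a purely hypothetical sketch of the kind of employee table such a database module might hold (the schema, field names, and sample record are invented for illustration, not the organization's actual design), the following uses Python's built-in sqlite3 module:

```python
# Hypothetical sketch of an HRIS employee table; schema and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database for demonstration
conn.execute("""
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        emergency_contact TEXT,        -- emergency information
        last_performance_rating REAL,  -- performance reports
        compensation_history TEXT      -- history of compensation
    )
""")
conn.execute(
    "INSERT INTO employee (name, emergency_contact, last_performance_rating) "
    "VALUES (?, ?, ?)",
    ("Jane Doe", "555-0100", 4.5),
)
# The database administrator's queries run against tables like this one.
for row in conn.execute("SELECT name, last_performance_rating FROM employee"):
    print(row)
```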
Secondly, the system has time and labor management capability. The purpose of this component is to simplify the management of employees’ time as well as labor. The component functions in such a way that employees can provide the number of hours that they have worked but with the approval of their supervisor. This allows for immediate verification of vacations by the managers as well as solves overtime issues. The system is developed in such a way that time reflects on the human resource manager’s end after the approval by the supervisor of an employee. Additionally, this module is meant to facilitate the human resource management in its role of tracking attendance as well as punctuality among the employees.
The third component of the human resource information system is the payroll module.
The primary importance of this module is to facilitate the remuneration of employees. The system operates in such a way that the human resource management department can download and upload employee hours and also deposit payroll cheques to employees when due. It is also meant to facilitate payment of salaried employees with minimal or no errors. Considering that payment of tax is crucial to any organization, the system performs tax calculations and makes the necessary deductions.
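A hypothetical sketch of the payroll module's core calculation (the flat tax rate and figures are invented; the real module applies full tax rules and deductions):

```python
# Hypothetical payroll calculation: gross pay, tax deduction, net pay.
def net_pay(hours_worked, hourly_rate, tax_rate=0.20):
    """Compute take-home pay after a flat, illustrative tax deduction."""
    gross = hours_worked * hourly_rate
    tax = gross * tax_rate  # placeholder for the system's tax calculations
    return gross - tax

print(net_pay(hours_worked=160, hourly_rate=25))  # 4000 gross -> 3200 net
```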
The benefits module is the fourth component. Employees enjoy benefits other than their salary, such as medical and retirement benefits. The human resource information system has made it possible for the human resource department to offer a one-stop shopping experience: all the information is readily available to both the employees and the management, allowing employees to request health benefits and at the same time enabling the management to authorize those benefits.
The fifth module of the human resource information system is the recruitment and retention module. The recruitment process of any organization is one of its most crucial processes; it is out of this process that all policies and systems originate. This module enables the organization to identify, acquire, and maintain talent. Applicants can apply through the system by uploading their documents, and recruitment is conducted through the system. It is also through this system that training in the organization is approved and carried out to develop the human resource workforce.
One gap is the absence of a referral mode of operation. Although internal employees can refer their preferred candidates for recruitment, there is no way of recognizing that. A referral system allows employees to refer their preferred candidates for the recruitment process[Sma15]. When a referral makes it all the way to clinching the position, the employee who referred him or her is awarded points or given a gift.
The assumption here is that with an internal referral, chances of obtaining the desired skills are high relative to sourcing an unknown employee.
Tools and Technique Analysis
Tools that have been used to develop this human resource information system include programming languages, text editors, and the database. The programming languages used in the development of this system include PHP, Python (Django), HTML, and Bootstrap. PHP is an acronym for hypertext preprocessor, a server-side language meant to improve interaction between the user and the database on the server[Dig17]. HTML stands for hypertext markup language, a language used to build the interface between the user and the system[Kri14]. Python, on the other hand, is an interpreted, object-oriented language with relatively simple syntax similar to that of Perl[Pau16].
Since this system is web-based, it is hosted on a WAMP server, and the database was developed using MySQL. The reason for adopting these tools was their familiarity, clarity of syntax, and compatibility with other systems[Mar12]. To facilitate computations and reusability of the software code, Python's Django framework was adopted[Mar12]. For better interaction with users on the client side, JavaScript was used to develop user interactions with the system. The actual interface through which the user interacts with the system was designed using HTML. Code was written using the Sublime Text editor, a high-performance text editor with a polished appearance[Rud14].
The Scrum approach was used to develop the system: members of the development team assigned one another roles and met on a regular weekly basis. To guarantee collaboration and fast progress, they opted to use Trello, a collaboration tool that provides an easier, more visual way of organizing and tracking project progress[Rud14].
Conclusions and Recommendations
Owing to the fact that the system was developed using HTML, accessing it across multiple platforms such as tablets and phones has remained a daunting task. Therefore, there is a need to redesign the system using HTML5 so that it presents well across multiple platforms. At the moment, the system is sometimes rendered unavailable when many people access it simultaneously.
This implies that scalability and stress were not considered during the development of the system. Although the Scrum approach was used in development, there is a need to ensure that an agile approach is adopted when scaling the system. There are also rising concerns over system security, with many systems being compromised; because this system was developed using open-source tools, the database should be migrated to the Oracle platform for better security and performance. In summation, since the deployment of this system, operations in the organization have changed for the better: human resource operations have been sped up, employee relationships have improved, and a more open and transparent recruitment and training process has been realized.
References
Digital Pugs. (2017, Mar 11). The importance of PHP web development. Retrieved from digitalpugs.com: https://www.digitalpugs.com/articles/the-importance-of-php-web- development.php
Elmasri, R., & Navathe, S. (2016). Fundamentals of database systems (7th ed.). New York: Pearson.
Gries, P., Campbell, J., & Montojo, J. (2016). Practical programming: An introduction to computer science using Python 3.6. New York: Pragmatic Bookshelf.
Jamsa, K. (2014). Introduction to web development using HTML5. Burlington: Jones & Bartlett Learning.
Johnson, M. (2012). A concise introduction to programming in Python. Boca Raton: Taylor & Francis Group.
Musngi, R. (2014, May 15). Essential tools for modern web development. Retrieved from developerdrive.com: http://www.developerdrive.com/2015/02/essential-tools-for-modern-web-development/
Smart Recruiters. (2015, June 17). Top 10 reasons why employee referral is so important. Retrieved from smartrecruiters.com: https://info.smartrecruiters.com/content/top-10- reasons-why-employee-referral-so-important
Information Science and Technology
↳ Modern Technology
Data Visualization Software Paper 2
Tableau and Power BI are two popular business intelligence (BI) and data visualization tools capable of helping organizations, especially hospitals and clinics, analyze and present their data for better analysis and decision-making (Milligan et al., 2022). Whereas the two BIs have analogous objectives, they vary in characteristics, proficiencies, pricing, and user experience. Below is a chart comparing the benefits and challenges of adopting Tableau and Power BI.
Benefits
Tableau: Tableau is a highly robust and flexible data visualization tool (Jena, 2019). It is well known for its ease of data retrieval and exploration.
Power BI: Power BI affords seamless integration with Microsoft tools like Excel, Azure, and SharePoint (Lyon, 2019). Power BI's "Q&A" feature permits users to ask queries in plain language and receive visualizations as responses, increasing its accessibility for non-technical users.

Challenges
Tableau: Tableau can be costly, particularly for smaller organizations or individual users; licensing costs can increase quickly if the organization needs advanced features (Jena, 2019). Data preparation is a time-consuming procedure in Tableau; cleaning and structuring data to satisfy the tool's requirements is challenging for complex datasets.
Power BI: Power BI offers a free version, but the more advanced features require a subscription, which can escalate cost (Lyon, 2019). Power BI can also have restrictions when connecting to certain databases, making it challenging for health organizations with varied data ecosystems (Lyon, 2019). While Power BI is very customizable, making wide customizations requires knowledge of the Power Query formula language (M) or DAX (Data Analysis Expressions), which might be challenging for some users (Lyon, 2019).

Cost
Tableau: Tableau is expensive, particularly for smaller businesses or individual users. Licensing costs cumulate quickly, specifically if the business or individual requires advanced features (Jena, 2019).
Power BI: Power BI has a free version; nonetheless, the more advanced features require a monthly subscription, which triggers more cost, especially when new features are added (Lyon, 2019).

Licensing
Tableau: Licensing for Tableau is according to the user's pricing model (Jena, 2019). For example, the user can decide to purchase Tableau products such as Tableau Server, Tableau Online, Tableau Prep, and Tableau Desktop.
Power BI: Licensing is according to the price of the editions (features of Power BI), for instance, Power BI Pro, Premium, and Embedded (Lyon, 2019).

Number of Users
Tableau: Tableau is the more scalable choice and can accommodate a greater number of users, making it appropriate for both small and large health organizations (Jena, 2019).
Power BI: Power BI is better suited to small and mid-sized health teams due to its limited scalability for large healthcare teams (Lyon, 2019).

Integration
Tableau: Tableau permits seamless integration with cloud services, SQL Server, Oracle, Amazon Redshift, and Google BigQuery (Jena, 2019).
Power BI: Power BI is part and parcel of the Microsoft ecosystem, making it easy to integrate seamlessly with SharePoint, Excel, Azure, and Dynamics 365 (Lyon, 2019).

Vendor
Tableau: Tableau is recognized for its robust customer support, delivering numerous support options such as phone and email support and an online knowledge platform (Jena, 2019).
Power BI: Power BI belongs to the Microsoft ecosystem, benefiting from Microsoft's all-embracing support infrastructure (Lyon, 2019). Microsoft offers quality customer support such as online documentation, forums, and a community where users can get assistance and share their experiences with Power BI.
Part Two
Based on the above analysis and presentation, I select Tableau as the best BI for our organization for many reasons. For example, Tableau possesses excellent data visualization proficiencies. According to Milligan et al. (2022), Tableau provides an extensive variety of charts, graphs, and collaborative dashboards, helping users and the organization communicate effectively and analyze data comprehensively. This matters because it enables the organization to make informed decisions based on clear and visually attractive insights. In addition, Tableau possesses a user-friendly interface, which improves its accessibility to non-technical and technical users alike and supports quicker adoption across the organization's departments (Arfat et al., 2020). In turn, the organization's employees can harness the power of data without extensive training or IT support. As the organization advances, its data expands; Tableau is appropriate because its scalability permits users to manage big datasets and accommodate forthcoming growth smoothly.
This scalability guarantees that the organization's investment in data analytics tools remains valuable in the long run. According to Cainas et al. (2021), Tableau provides robust data integration competences, allowing employees and the organization in general to link to numerous data sources such as databases, cloud platforms, and spreadsheets. This presents an opportunity for the organization to consolidate and examine information from numerous sources, enabling a holistic view of its operations (Ahmad et al., 2020). Tableau supports progressive analytics and predictive modeling through integrations with statistical tools such as R and Python, which can empower the organization to discover deeper insights and generate data-driven estimates to inform its strategic decisions (Ahmad et al., 2020). Finally, Tableau supports easy collaboration on data projects and the sharing of insights with stakeholders.
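As a small, hypothetical illustration of the Python integration mentioned above, the sketch below uses pandas to clean and aggregate a dataset before loading it into Tableau; the file names and columns are invented, and pandas is assumed to be installed.

```python
# Hypothetical data-preparation step before visualizing in Tableau.
import pandas as pd

df = pd.read_csv("patient_visits.csv")           # invented source file
df = df.dropna(subset=["visit_date", "clinic"])  # drop incomplete records
df["visit_date"] = pd.to_datetime(df["visit_date"])
monthly = df.groupby([df["visit_date"].dt.to_period("M"), "clinic"]).size()
monthly.rename("visits").reset_index().to_csv("monthly_visits.csv", index=False)
# Tableau can then connect to monthly_visits.csv as a data source.
```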
References
Ahmad, H. (2020). Tableau for Beginner: Data Analysis and Visualization 101. Haszeli Ahmad.
Arfat, Y., Usman, S., Mehmood, R., & Katib, I. (2020). Big data tools, technologies, and applications: A survey. Smart Infrastructure and Applications: Foundations for Smarter Cities and Societies, 453-490.
Cainas, J. M., Tietz, W. M., & Miller-Nobles, T. (2021). KAT Insurance: Data analytics cases for introductory accounting using Excel, Power BI, and/or Tableau. Journal of Emerging Technologies in Accounting, 18(1), 77-85.
Jena, B. (2019). An Approach for Forecast Prediction in Data Analytics Field by Tableau Software. International Journal of Information Engineering & Electronic Business, 11(1).
Lyon, W. (2019). Microsoft Power BI Desktop: A free and user-friendly software program for data visualizations in the Social Sciences. Historia, 64(1), 166-171.
Milligan, J. N., Hutchinson, B., Tossell, M., & Andreoli, R. (2022). Learning Tableau 2022: Create effective data visualizations, build interactive visual analytics, and improve your data storytelling capabilities. Packt Publishing Ltd.
Information Science and Technology
↳ Modern Technology
Computer Science and Information Systems
Introduction
In this paper, our focus is directed toward the relationship that exists between Computer Science, Information Systems, and Information Technology. A fundamental distinction between the three fields is also analyzed. We attempt to detail how the comprehension of Computer Science concepts is imperative for a successful career in Information Systems. Additionally, my personal academic focus and the knowledge acquired, and still to be acquired, in relation to the field of study are assessed.
Computer Science
The Computer Science discipline is more inclined to answer the "how" of computer applications. Thus, it focuses on operating system architecture and design, designing and building software, computer programming languages, and developing effective methods to solve computer-related problems. The fundamental part of Computer Science is the heavy use of mathematical concepts and computer algorithms (Denning et al., 2017). There are diverse career pathways associated with Computer Science, such as software engineer or programmer, Java developer, database administrator, and network engineer. Computer programming corresponds to devising ways to instruct computers, through processes and procedures known as algorithms, to perform specific tasks. These procedures are directed toward manipulating objects, numbers, images, sounds, graphical content, etc., to achieve an expected result, a process akin to performing a magical art, cooking a dish, painting a house, or building a craft work. In the field of computer programming, languages like Python, R, FORTRAN, C++, and C are employed to realize a programmer's art or intuition. Database architecture, or database systems, is another fundamental component of computer science. This sub-component is oriented toward understanding systems dedicated to converting large groups of data into an abstract tool that allows users to customize inputs consistent with their expectations (Brookshear & Brylow, 2015). Another fundamental concept of computer science is network architecture or engineering, which concerns how computers can be linked in order to share data or resources.
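To make the notion of an algorithm concrete, here is a short, generic Python example (illustrative only, not tied to any particular curriculum): binary search, a classic procedure for locating a value in a sorted list.

```python
# Binary search: a classic algorithm expressed as step-by-step instructions.
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2   # examine the middle element
        if sorted_items[mid] == target:
            return mid            # found: return its position
        if sorted_items[mid] < target:
            low = mid + 1         # discard the lower half
        else:
            high = mid - 1        # discard the upper half
    return -1                     # target is not present

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # -> 4
```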
Information Technology
This discipline is considered an offshoot of computer science; it encompasses the utilization of existing operating systems, software, and applications to solve business or organizational problems. It is more inclined toward assembling existing building blocks to perform specific tasks. Information Technology is a solution-oriented discipline destined to help achieve organizational or business objectives. Its fundamental composition encompasses topics such as introduction to operating systems, system security, introduction to databases, and computer interfaces. Additionally, professionals in Information Technology are given the training necessary to tackle the everyday computing demands of every organization, including government units, healthcare units, schools, and so on. Career pathways in information technology include Chief Information Officer, Network Administrator, Information Technology Manager, etc.
Information Systems
Information systems denote the systems and procedures used to create, manage, manipulate, distribute, and disseminate information within an organizational setting (Matlin, 1979). A thin line exists between Information Systems and Information Technology in terms of their adoption. However, the former employs information theory, social science, and information technology as its core fundamental concepts, with career pathways geared toward various fields such as computer security, communications, actuarial analysis, and business analysis.
Inference
Computer Science is a discipline that explores the adoption of mathematical concepts to comprehend how computers work. It is primarily concerned with software development, hence knowledge of programming languages such as Python, C#, and Java forms the core of the discipline. The field emphasizes the use of algorithms to instruct computer systems to perform tasks that lead to the solution of problems. Thus, the discipline gives details about the architecture and design of computer systems. On the other hand, Information Technology is dedicated to utilizing already existing technology to solve business and organizational problems.
Topics such as database systems, operating systems, and system security are the fundamental concepts that underpin Information Technology. Information Systems, by contrast, corresponds to all the procedures, systems, and users involved in the creation, storage, and manipulation of information within an organizational setting.
Taking into consideration the underlying principles that govern the three fields brings to light a technological resemblance among them: Computer Science, Information Technology, and Information Systems all depend on computer systems to exist.
An understanding of Computer Science concepts is vital for a successful career in Information Systems because it addresses the underlying principles of why computers behave the way they do. Moreover, Computer Science develops efficient ways of solving computer-related problems, which adds a further dimension to a career in Information Systems. An insight into Computer Science concepts therefore gives a fundamental perspective on Information Systems and enhances effective problem solving.
The Doctor of Information Technology is a challenging degree to achieve, so ignoring all forms of distraction is a priority in order to maintain focus on my studies. To this end, I have set up an event calendar with reminders to help keep track of important events and to manage unforeseen contingencies effectively. A daily three-hour after-work study policy is also positively shaping my balance between school and work, further helping me focus on my studies. Ultimately, two things keep me focused: my passion for information technology and the final doctorate award.
Considering the listed core technological courses, such as principles of programming, operating systems and network architecture, data modeling and database design, and enterprise systems architecture, I am convinced that I need to gain insight into programming languages such as Python, R, C#, C++, C, and FORTRAN, since I am currently versed in others such as SQL, VBA, and Hadoop, which are more inclined toward database management systems. Further study of networking is also needed if a successful and smooth completion of the DIT program is to be attained.
References
Denning, P. J., Tedre, M., & Yongpradit, P. (2017). Misconceptions about computer science. Communications of the ACM, 60(3), 31-33. doi: 10.1145/3041047
Brookshear, J. G., & Brylow, D. (2015). Computer science: An overview (12th ed.). Pearson.
Matlin, G. (1979). What is the value of investment in information systems? MIS Quarterly, 3(3), 5-34. Retrieved from http://www.waldenlibrary.org/
Information Science and Technology
↳ Modern Technology
Annotated Bibliography on R and Python Programming Languages
Ozgur, C., Colliau, T., Rogers, G., Hughes, Z., & Myer-Tyson, B. (2017). MatLab vs. Python vs. R. Journal of Data Science, 15(3), 355-372.
This paper expounds on the effectiveness of R, Python, and MATLAB. It is written in a way that suits teaching in colleges, universities, or any other institution of higher learning. It explains the basics of all three programming languages and identifies who mostly uses each of them. It also lists the advantages and disadvantages of each of the three languages, and further discusses the advantages of each language over the others.
The paper further explains the utilization of the three programming languages, analyzing each one and comparing it against the others. The examples given are real-life examples, and the analysis process is explained in every programming language. Before concluding, the authors introduce "new" statistical methods in which R is compared with SAS and SPSS, accompanied by good examples that help readers broaden their understanding of data analysis, data mining, and big data. At the very end of the paper, the authors share their experiences with all three programming languages.
Matloff, N. (2011). The art of R programming: A tour of statistical software design. No Starch Press.
This book discusses everything that concerns R programming, from running an R program, performing simulations, and doing mathematics in R, through to installing and using R packages and libraries. It focuses solely on the R programming language and how it supports programming paradigms such as object-oriented programming (OOP).
To describe this book in a few words: it contains all the essential information concerning the R programming language. It explicitly explains the debugging facilities in R and how they are used. For readers who have struggled with interfacing R with other languages, the book solves that problem by explaining how to interface R with the C and C++ programming languages. It also introduces a way to use R from Python, achieved by installing a package called RPy and following the RPy syntax as explained in the book.
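As a brief illustration of the R-from-Python idea the book describes, the sketch below uses rpy2, the modern successor to the RPy package the book covers; using rpy2 here is my own substitution, and the example assumes rpy2 and an R interpreter are installed.

# Calling R from Python via rpy2 (successor to RPy).
import rpy2.robjects as robjects

# Evaluate an R expression and pull the result back into Python.
r_mean = robjects.r("mean(c(1, 2, 3, 4, 5))")
print(list(r_mean))  # [3.0]

# Look up an R function object and call it directly from Python.
r_sum = robjects.r["sum"]
print(list(r_sum(robjects.FloatVector([1.5, 2.5]))))  # [4.0]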
Chun, W. (2001). Core Python programming (Vol. 1). Prentice Hall Professional.
The author of this book, Wesley J. Chun, precisely explains everything concerned with the Python programming language, including definitions, the history of Python, the features the language offers, how to obtain Python, and the installation procedure. The syntax of Python and the objects used in the language are clearly presented, and the examples given help readers deepen their grounding in Python. The book is meant for beginners, intermediate users, and experts alike, and is suitable for teaching at colleges.
Part two of the book contains advanced topics showing how Python can be integrated with other technologies. The topics discussed there include web programming, graphical user interface programming with Tkinter, multithreaded programming, and network programming, among others. In summary, this is an excellent book for getting started with learning and understanding Python programming.
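As a small taste of the GUI material the book covers, the sketch below shows a minimal Tkinter window in Python; the widget layout and text are illustrative only, not taken from the book.

# A minimal Tkinter GUI: a label and a button that updates it.
import tkinter as tk

root = tk.Tk()
root.title("Hello, Tkinter")

label = tk.Label(root, text="Click the button")
label.pack(padx=20, pady=10)

button = tk.Button(root, text="Greet",
                   command=lambda: label.config(text="Hello from Python!"))
button.pack(pady=10)

root.mainloop()  # hand control to the Tkinter event loop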
Jones, O., Maillardet, R., & Robinson, A. (2014). Introduction to scientific programming and simulation using R. Chapman and Hall/CRC.
The motivation behind this book is the rapid growth of the R programming language, its ongoing development, and its application in a number of fields. The book illustrates the application of R programming across different disciplines of the education curriculum. It is helpful to developers and programmers who use R to build software, as well as to those doing data analysis and statistics.
The authors concentrate on simulation using R, starting from an introduction to R programming and, in later chapters, describing how simulation can be carried out in R. In addition, the book explains some key terms in statistics and data analysis, showing how R is used in these other branches of science. In summary, the book gives a clear understanding of how R programming is used in data analysis and statistics.
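The book's simulations are written in R; to keep this document's code examples in one language, the sketch below shows an analogous Monte Carlo simulation in Python, estimating pi by random sampling. It illustrates the general technique only and is not an excerpt from the book.

# Monte Carlo estimate of pi: sample random points in the unit square
# and count how many land inside the quarter circle of radius 1.
import random

def estimate_pi(n_samples: int = 100_000) -> float:
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / n_samples  # area ratio scales to pi

print(estimate_pi())  # roughly 3.14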
Chambers, J. (2008). Software for data analysis: programming with R. Springer Science & Business Media.
This book starts with an introduction to the principles and concepts used in R. After the introduction, it goes deeper into the usage of R programming and its evaluation. The examples given help statisticians learn how to produce different plots using R. The book also explains packages and libraries and provides a guide on how to find and install them.
The author did not end there; he further gives real-world examples of the application of R programming in business and industry. He explains that R has helped many businesses gain large profits, because its statistical approach helps them learn market trends, make quality decisions, and at times predict the likelihood of future events using R programming tools and techniques.
Sanner, M. F. (1999). Python: a programming language for software integration and development. J Mol Graph Model, 17(1), 57-61.
This paper starts by giving a definition of Python and a brief history of the language. The introductory part also covers Python extensions, how Python performs, and how Python functions as a tool for integration. In the section explaining Python as an integration tool, the author describes how this functionality is implemented and how it benefits platform independence.
The paper introduces a term used in the Python world called wrapping. Quoting from the paper, "this is a process where an existing C, C++ or Fortran code is accessed from the python programming language." Such existing code can alternatively be re-implemented in Python; wrapping is used when re-implementation does not make sense. The paper gives a good example explaining how to wrap code and how this wrapping occurs.
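To illustrate the general idea of wrapping, the sketch below calls a function from an existing C library directly from Python using the standard-library ctypes module. Note that ctypes is my own choice for illustration, not the mechanism the paper describes, and the example assumes a Unix-like system where the C math library can be located.

# Wrapping existing C code: call the C library's sqrt from Python.
import ctypes
import ctypes.util

# Locate and load the C math library (e.g. libm.so.6 on Linux).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of sqrt: double sqrt(double).
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951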
Lutz, M. (2013). Learning Python: Powerful object-oriented programming. O'Reilly Media, Inc.
This book is quite interactive, since it starts with a question-and-answer session. The questions in this section address the frequently asked questions concerning the Python programming language, and they are answered in a way that even a first-time Python learner can understand, helping readers familiarize themselves with the language.
In the subsequent sections, the Python programming language is described in depth, with the different functions used in Python discussed and very readable, understandable examples given. Everything it takes to become a Python guru is found in this book, making it a good companion on the journey to becoming a Python expert. In addition, the book includes exercises after every section that help readers put into practice what they have learned.
Fox, J. (2005). Getting started with the R commander: a basic-statistics graphical user interface to R. J Stat Softw, 14(9), 1-42.
The paper starts by stating how R differs from S-PLUS: a statistical graphical user interface is not built into R, but R includes the tools needed to build graphical user interfaces. I rate this paper highly because it provides screenshots showing how to install R along with its packages and libraries, which guides users who are installing R on their computers for the first time.
The audience the author focuses on is newcomers to the R programming language and to the R graphical user interface. With this paper, anyone can install and use R without any problem. The paper concludes by giving practical examples that new users of R are encouraged to run before attempting larger programs. For any newcomer to R, this is therefore the recommended paper to read.