Sunday, January 26, 2020

Definition Of Noise Pollution

Noise pollution can be defined as a type of energy pollution in which distracting, irritating, or damaging sounds are freely audible. Noise pollution contaminants are not physical particles, but rather waves that interfere with naturally occurring waves of a similar type in the same environment. Sounds are considered noise pollution if they adversely affect wildlife or human activity, or are capable of damaging physical structures on a regular, repeating basis. In the broadest sense of the term, a sound may be considered noise pollution if it disturbs any natural process or causes human harm, even if the sound does not occur on a regular basis.

1.12 Measurement of sound
Sound is transmitted through the air as a series of compression waves. Three terms are connected with sound: strength, pitch and frequency. Strength is measured in units of decibels (dB). The decibel is a ratio expressed on a logarithmic scale, which accommodates the wide range of sound power, intensity and pressure. The decibel scale begins at zero, which represents the faintest sound audible to a normal ear. In environmental noise pollution the decibel is used as a measure of sound power level, sound intensity level and sound pressure level. A decibel is a physical unit based on the weakest sound that can be detected by the human ear, and it is named after Alexander Graham Bell, the inventor of the telephone. The human ear is sensitive to sound in the range of 20 to 20,000 Hz.

1.13 Sources of noise pollution
People living in urban areas complain about the noise from automobile traffic, overhead airplanes and helicopters, leaf blowers, pneumatic drills, and neighbors who play their televisions and stereos much too loudly. Large urban areas are being inundated by unwanted sounds that disturb and disrupt ongoing activities and peaceful interludes. One cannot concentrate on a work project if there is constant drilling at a nearby construction site. It is virtually impossible to enjoy a television program when overhead jets frequently drown out its sound. It is difficult to fall asleep if an upstairs neighbor's stereo system is blasting away.

There are many sources of noise pollution in urban areas. In general, the sources may be stationary or mobile. Examples of stationary sources include the use of loudspeakers on various occasions such as festivals, elections, worship in temples and mosques, and during advertisements; mining operations; the use of bulldozers, drills and dynamite to break rocks; and household gadgets such as the vacuum cleaner, TV, radio, stereo, grinder and mixer. Mobile sources, on the other hand, can be classified as transportation/traffic noise (road traffic or highway noise), industrial noise, noise from construction work, and neighborhood noise.

Noise has direct physiological effects on humans, such as hearing damage, which can include hearing loss and tinnitus (ringing in the ears), as well as cardiovascular and hormonal disturbances. Indirect effects include sleep loss, interference with concentration and learning, mood changes and aggression, and social isolation. Noise pollution is also becoming a huge problem for many animals, which rely on their hearing to detect predators, find mates, establish territory and recognize warning alerts.
Unnaturally high levels of noise can damage their hearing and can also mask more subtle sounds that they need to hear in order to survive and reproduce. They may also react with a fight-or-flight response to artificial sounds such as aircraft noise, using up valuable energy reserves to flee from a non-existent predator. If noise in an urban area becomes too intrusive, animals may shift to a new territory or alter their migration patterns, which can create new complications for their mating and survival. Noise pollution also affects the environment: plants are as sensitive as humans and need a calm and peaceful environment for better growth, so noise pollution results in poorer-quality crops.

Aim
The aim is to raise awareness among people living in urban areas surrounded by noise pollution. I would like to cultivate mass awareness among the urban population, since the level of awareness of noise-related environmental issues in urban areas is very low. This will be a step towards prevention, making people aware of the dangers of the noise pollution surrounding them, how they can avoid or decrease the level of noise, and the effects and causes related to noise pollution.

1.3 Plan of the Research
The plan is to investigate the level of awareness in society regarding environmental issues, particularly noise pollution, and to identify the best actions that need to be taken to make the society/community aware. One part of the plan is creating survey questions that test respondents' personality and attitude towards the environment, especially noise pollution. By performing these tasks, we can identify the best strategy to get people's attention on this serious matter. A few of the questions that will be discussed are listed below.

1.4 Research Questions
What are the effects of noise on human health?
How does noise affect babies and children?
What are the most common sources of noise pollution?
What problems does noise pollution cause for people?
What problems does noise pollution cause for animals?
What are the effects of noise pollution on the environment?
What can I do personally to reduce my own noise pollution?
How can we reduce noise pollution?
What precautions can we take to avoid noise pollution?
Do you think the public is aware of the noise pollution issue?
How can we educate people to make them emotionally/spiritually aware in better ways, such as storytelling, video art, etc.?
Does urbanization affect/increase the level of noise?
Do you think we can totally get rid of noise pollution in urban cities?
Are you aware that there are rules and regulations regarding noise?

Research (digital recording) locations
The other plan is to record digital audio (sound) and visual (video) material to show the main causes of noise pollution. The planned locations are:
Main town
Highway road
LRT / Commuter rail
Factories
Neighborhood around the town
Entertainment places (club, pub)
Construction in the town

1.5 Justification for the Research
Noise is all around us, but that doesn't mean we should just placidly accept unhealthy and unsafe levels of noise. Exposure to loud or constant noise can contribute to or cause hearing loss, or a reduction in the ability to hear some frequencies of sound. Noise can prevent us from sleeping, which in turn affects our health and mood. Noise can distract us from our work, leading to errors and their consequences.
Noise can also produce high stress levels by triggering the body's fight-or-flight response, which in turn leads to high blood pressure. In short, noise is not just a nuisance; it is a real health issue. By doing this research, I may contribute to and help people living in urban areas by raising awareness of noise pollution, and at the same time help to protect our habitat and environment in urban areas. Through reading my research paper and viewing my artwork, awareness will be created and people will be able to control sound and decrease the level of noise pollution in urban areas. Furthermore, people in the cities can also avoid some of the health effects related to noise pollution.

1.6 Methodology
I will gather data from internet sources, magazines, journals, etc.
Documentary: I will try to get documentaries and speeches by environmentalists.
Interview: I will try to find people from the Malaysian Nature Society (MNS) to answer some questions via email.
Survey: I have prepared questions as my survey.
Visual: I will get some information about noise pollution via audio and video recorded at spot locations in the urban city.

1.7 Scope
Sample from MMU students aged 18-30 years old.
Public aged 18-30 years old.

1.8 Outline of the Thesis
Chapter 1: I will provide the introductory details of my research topic.
Chapter 2: I will present the work that previous researchers have done, theoretically and practically.
Chapter 3: I will provide details such as the materials used to collect data and the details of the investigation.
Chapter 4: I will discuss the results gathered from chapter 3 and provide a full analysis, interpreting the data in the form of tables, charts and figures.
Chapter 5: I will answer all the questions stated in chapter 1. I will then interpret the research findings, check whether the data agree with the aims and proposal of my research, and provide recommendations if there is a need.

1.9 Definitions
Urban: An urban area is characterized by higher population density and vast human features in comparison to the areas surrounding it. Urban areas may be cities or towns.
Pollution: Pollution is the introduction of a contaminant into the environment. It is created mostly by human actions, but can also be a result of natural disasters. Pollution has a detrimental effect on any living organism in an environment, making it virtually impossible to sustain life.
Noise: Noise is unwanted electrical or electromagnetic energy that degrades the quality of signals and data. Noise occurs in digital and analog systems, and can affect files and communications of all types, including text, programs, images, audio, and telemetry.

Friday, January 17, 2020

How to Be Successful Business Manager

Introduction
Everyone desires success. In the business field, becoming a successful manager is what the majority long for. Nobody changes into a well-rounded manager overnight; learning is necessary for everyone during this process. "People learn to manage by managing under guidance of a good manager" - Michael Armstrong. Experience can undoubtedly be an excellent tutor for success, but having a good guide can further allow one to make use of his or her experience to the largest extent. Being a successful manager requires a huge amount of skill and knowledge. In this paper, not all elements will be mentioned; five essential elements have been chosen for discussion: working smart, risk management, interpersonal talent and skills, self-management and leadership skills.

Work Smart - Work Effectively and Efficiently
To express how to work smartly, Susanne Madsen, a PRINCE2 and MSP practitioner and a qualified corporate and executive coach, tells us her tale of success. Working smarter, not harder, as explained by Susanne, means working less and accomplishing more by increasing the quality of work. To get things done wisely, she highlights the significance of correcting one's internal persuasion and attitude towards a task so that it is not seen as a burden; one can then regain the vigor for working. Drawing on her own experience, she shows that self-assessment and continuous self-adjustment are of paramount importance for success. She also suggests two key points. The first is delegation: a manager should delegate the less vital jobs to others and use that time to communicate more with the main people associated with the project, developing a better linkage with them. The second is initiative: a manager should have a more well-rounded plan at the beginning instead of responding to an incident or risk only after it has occurred. I am glad to have read this inspirational article written by a top leader. It gives me great encouragement. Her sharing and skills are useful not only in the workplace, but also at the senior academic level. I believe that many college students, just like me, hold the improper belief that a project is burdensome. Why can't I alter my mindset, abandon that constraint and step up to success smartly? Just try it!

Self-management
Walter Vieira (2005) argues that people refuse to face the gap between their realistic self-image and their idealistic self-image (p. 33), and so evade the results of a SWOT analysis of their advantages, disadvantages, opportunities and threats (p. 34). However, people who do not understand their strengths and weaknesses will not be able to strengthen and improve themselves, and cannot develop their abilities. In addition, they need to assess whether they have the basic elements, including a positive goal, ambition and energy. They also need to know whether they are clearly aware of their thoughts and whether they are adaptable, witty and creative. He says that people have two reasons for seeking an identity: one is a higher standard of living, and the other is satisfying others' expectations. He agrees with Cyrus Vance that people always evaluate their progress against their friends, and become unhappy because their achievements are worse than others' (p. 35). Therefore he reminds people not to compare and assess their progress against others, because different people doing different things will make different progress, as long as they comply with their own plans (pp. 35-36).
At the same time, one should learn from observing others, and through others observe oneself, and then combine the best qualities of oneself and others (p. 36). In my opinion, successful managers need to have a correct plan in order to achieve their positive goals, and I think Walter Vieira missed this important point. They should know how to assess whether their plan is feasible under environmental pressures; for example, a plan may need to change during a financial crisis.

Leadership skills
Leadership is the central factor influencing a general manager's success. In the book Successful Management, Neville (1995) argues that leadership has five main dimensions. First, defining a vision is the preliminary stage: the best leaders are adroit at original thinking and picking up on creativity, and they are not faint-hearted about change; once the vision has been ascertained, leaders should attain it (pp. 17-18). Second, commitment to success is not only about eagerness, drive and the will to win; it is also about endless preparation (p. 18). Third, a leader should communicate unabashedly and frankly, because communication lets the team learn from each other's experience (p. 18). Fourth is challenging the status quo: young managers should find more innovative solutions in the business market (p. 19). Finally, there are the personal characteristics found in extraordinary leaders: he highlights flexibility, enthusiasm, integrity, willingness to experiment, and the abilities to inspire others, build relationships, inspire trust, communicate and delegate as those most routinely emphasized in outstanding leaders (p. 19). He concludes that although intellect is missing from these five dimensions, it is not inessential; it is simply that the role of leadership is so wide that high intellect is not always necessary (p. 19). To a large extent, I agree with Neville's five main dimensions of leadership. I believe communication is the most important part of leadership, as it permeates every aspect of the business. For all managers, especially those leading teams, communicating with the workforce is a demanding and rewarding task.

Interpersonal talent and skills
A successful businessman must possess good communication skills. He should provide different channels for staff to express opinions, which helps their understanding of the company's objectives. Effective communication with employees can ensure a thorough understanding of the leader's decisions and expectations of their work. Maintaining a comfortable working environment can encourage staff to be more willing to talk to senior levels (p. 219). A reduced status difference (p. 219) and enhanced staff morale help in collecting employees' feedback and suggestions, which contributes to company improvements and also helps ensure all staff are working towards the same goals. I believe a successful businessman should be a good leader and a team player. It is very important that a person can share his knowledge and experience with his employees and ensure his messages and decisions are effectively conveyed to them. To build a good relationship with staff, he should respect them and encourage them to participate in production and provide new ideas at work. Appreciation of participation creates good staff morale and motivates them to work towards the same organizational goals.
A person will not be successful if he cannot work well with his subordinates and employees. Even if he has lots of marvelous ideas, if no one is willing to work with him he will only be a talented individual instead of a successful leader in the company.

Risk management skills
Living with, and challenged by, risk, Nick Jackson believes that risk has become firmly fixed on the business management agenda. People who learn to manage risk robustly hold the key to success in their business. Whatever risks leaders face, the major challenge is how to identify, tackle and monitor them, and how to plan an understandable, maintainable and applicable contingency plan. Not many people in industry today achieve this, he argues; they can recognize the risks but fail to observe their dormant influence, or lose sight of the controlling systems. He thought that fewer than 30% of organizations manage risk well, and even fewer do so effectively (p. 38). Moreover, Nick states that "risk" should be framed in positive as well as negative terms: people should not only keep improving the way they manage difficulties, but also focus on avoiding the negative effects of complacency when they are at an advantage (p. 42). Nick Jackson has reminded me that we cannot slack off from supervising risk in any form at any moment. It requires all-round critical thinking: we cannot stop monitoring the business once one risk has been tackled. This part of business is hard and tough to handle, and it is a great challenge to every businessman and administrator.

Conclusion
"The productivity of work is not the responsibility of the worker but of the manager." - Peter F. Drucker, expert management consultant. All in all, the five abilities discussed in the passages above are inseparable. These five categories show how a manager can strengthen productivity and competitiveness in the industry. The topic can be summed up in one sentence: "Manage yourself well, then lead the team to work smartly with communication and a risk management plan." Within this project, we recognized that these skills are applicable not only in business-related industries but also across different criteria in different industries. In the 21st century, people deserve a better quality of service and bosses deserve higher efficiency of work from their staff. As a result, this paper will be valuable to the majority of the world.

References
Bain, Neville. "Management or leadership?" Successful Management. 1st ed. London: Macmillan Press Ltd. 14-9. Print.
Cheng, C. N. "Communication in Business." Introduction to Business Studies (Revised Edition). 1st ed. Hong Kong: Hong Kong Educational Publishing Co, 2009. 212-231. Print.
Kinicki, Angelo, and Brian K. Williams. "The nature of leadership." Management: A Practical Introduction. 4th ed. New York: McGraw-Hill/Irwin, 2009. 436-38. Print.
Jackson, Nick. "Risk is on the corporate agenda, but where does it fit?" Managing Business Risk. 3rd ed. Great Britain: Kogan Page Ltd, 2006. Print.
Samson, Danny, and Richard L. Daft. "Leadership in organisation." Fundamentals of Management. 2nd ed. South Melbourne, Victoria, Australia: Thomson Learning Australia, 2005. 427-31. Print.
Madsen, Susanne. "My Story: Work Smarter Not Harder." Projectsmart.co.uk. Projectsmart, 10 September 2011. Web. 10 April 2012.
Vieira, Walter. Manager to CEO: Corporate Wisdom for Survival and Success. New Delhi/Thousand Oaks/London: Tejeshwar Singh for Response Books - Sage Publications Inc, 2005. Print.

Thursday, January 9, 2020

The functions of an Insurance Firm

An insurance firm functions by creating insurance products and attains profitability by charging premiums that exceed the overall expenses of the firm, and by making wise investment decisions that maximize returns under varied risk conditions. The method of charging premiums depends on various underlying factors such as the number of policyholders, the number of claims, the amount of claims, and the health condition, age and gender of the policyholder. Some of these factors, such as aggregate loss claims and human mortality rates, have an adverse impact on determining the premium required to remain solvent. These factors need to be modelled using large amounts of data, numerous simulations and complex algorithms in order to determine and manage risk. In this dissertation, we consider two important factors affecting premiums: aggregate loss claims and human mortality. We use theoretical simulations in R and Danish loss insurance data to model aggregate claims. The Human Mortality Database (HMD) is used, and smoothed human mortality rates are computed to price life insurance products.

In chapter 2, we examine the concept of compound distributions in modelling aggregate claims and perform simulations of compound distributions using R packages such as MASS and actuar. We then analyse Danish loss insurance data from 1980 to 1990 and fit appropriate distributions using customized, generically implemented R methods. In chapter 3 we briefly explain the concepts of graduation, generalised linear models and smoothing techniques using P-splines. We obtain deaths and exposure data from the Human Mortality Database for the selected countries Sweden and Scotland and implement mortality rate smoothing using the MortalitySmooth package. We compare mortality rates for various sets, such as males and females for a specific country, or total mortality rates across countries like Sweden and Scotland, for a given time frame, age-wise or year-wise. In chapter 4, we look into various life insurance and pension related products widely used in the insurance industry and construct life tables and commutation functions to implement annuity values. Finally, we provide the concluding comments of this dissertation in chapter 5.

Chapter 2 Aggregate Claim Distribution

2.1 Background
Insurance companies implement numerous techniques to evaluate the underlying risk of their assets, products and liabilities on a day-to-day basis for many purposes. These include:
Computation of premiums
Initial reserving to cover the cost of future liabilities
Maintaining solvency
Reinsurance agreements to protect against large claims
In general, the occurrence of claims is highly uncertain and has an underlying impact on each of the above, so modelling total claims is highly important in ascertaining risk. In this chapter, we define claim distributions and aggregate claims distributions and discuss some probability distributions fitting the model. We also perform simulations and goodness-of-fit tests on data, and conclude the chapter by fitting an aggregate claim distribution to Danish fire loss insurance data.
2.2 Modelling Aggregate Claims
The dynamics of the insurance industry have different effects on the number of claims and the amount of claims. For instance, expanding insurance business would proportionally increase the number of claims but have negligible or no impact on the amounts of individual claims. Conversely, cost control initiatives and technology innovations have an adverse effect on claim amounts but no effect on the number of claims. Consequently, aggregate claims are modelled on the assumption that the number of claims and the amounts of claims are modelled independently.

2.2.1 Compound distribution model
We define a compound distribution as follows:
S - random variable denoting the total claims occurring in a fixed period of time.
X_i - the claim amount of the i-th claim.
N - non-negative, independent random variable denoting the number of claims occurring in that time.
Further, {X_i} is a sequence of i.i.d. random variables with probability density function f(x) and cumulative distribution function F(x), with P(X_i > 0) = 1 for 1 <= i <= N. Then we obtain the aggregate claims S as
S = X_1 + X_2 + ... + X_N (with S = 0 when N = 0),
with expectation and variance of S given by
E[S] = E[N] E[X]
Var(S) = E[N] Var(X) + Var(N) (E[X])^2.
Thus S, the aggregate claim, is computed using the collective risk model and follows a compound distribution.

2.3 Compound Distributions for Aggregate Claims
As discussed in Section 2.2, S follows a compound distribution, where the number of claims (N) gives the primary distribution and the amount of claims (X) the secondary distribution. In this section we describe the three main compound distributions widely used to model aggregate claims. The primary distribution can be modelled by non-negative integer valued distributions such as the Poisson, binomial and negative binomial; the selection of a distribution depends on the case at hand.

2.3.1 Compound Poisson distribution
The Poisson distribution is the distribution of occurrences of rare events; the number of accidents per person, the number of claims per insurance policy and the number of defects found in product manufacturing are real examples of a Poisson distribution. Here, the primary distribution N has a Poisson distribution with parameter λ, denoted N ~ P(λ). The probability function, expectation and variance are
P(N = x) = e^(-λ) λ^x / x!, for x = 0, 1, 2, ...
E[N] = λ, Var(N) = λ.
Then S has a compound Poisson distribution with parameters λ and F_X, denoted S ~ CP(λ, F_X), with
E[S] = λ E[X] and Var(S) = λ E[X^2].

2.3.2 Compound Binomial distribution
The binomial distribution is the distribution of the number of successes occurring in a series of trials; the number of males in a company or the number of defective components in a random sample from a production process are real examples of this distribution. The compound binomial distribution is a natural choice to model aggregate claims when there is an upper limit on the number of claims in a given time period. Here, the primary distribution N has a binomial distribution with parameters n and p, denoted N ~ B(n, p). The probability function, expectation and variance are
P(N = x) = C(n, x) p^x (1-p)^(n-x), for x = 0, 1, 2, ..., n
E[N] = np, Var(N) = np(1-p).
Then S has a compound binomial distribution with parameters n, p and F_X, denoted S ~ CB(n, p, F_X), with
E[S] = np E[X] and Var(S) = np Var(X) + np(1-p) (E[X])^2.

2.3.3 Compound Negative Binomial distribution
The compound negative binomial distribution also models aggregate claims. The variance of the negative binomial is greater than its mean, so we can use the negative binomial instead of the Poisson distribution when the data have greater variance than mean; in that case it provides a better fit to the data.
Here, the primary distribution N has a negative binomial distribution with parameters n and p, denoted N ~ NB(n, p), with n > 0 and 0 < p < 1. The probability function, expectation and variance are
P(N = x) = C(n+x-1, x) p^n (1-p)^x, for x = 0, 1, 2, ...
E[N] = n(1-p)/p, Var(N) = n(1-p)/p^2.
Then S has a compound negative binomial distribution with parameters n, p and F_X, denoted S ~ CNB(n, p, F_X).

2.4 Secondary Distributions: Claim Amount Distributions
In the previous Section 2.3 we defined the three compound distributions widely used. In this section, we define the distributions generally used to model the secondary distribution, i.e. the claim amounts. We use positively skewed distributions. These include the Weibull distribution, used frequently in engineering applications, as well as distributions such as the Pareto and lognormal which are widely used to study loss distributions.

2.4.1 Pareto distribution
The distribution is named after Vilfredo Pareto, who used it in modelling economic welfare; it is used these days to model income distributions in economics. The random variable X has a Pareto distribution with parameters α and λ, where α, λ > 0, denoted X ~ Pa(α, λ) or X ~ Pareto(α, λ). The probability density function, expectation and variance are
f(x) = α λ^α / (λ + x)^(α+1), for x > 0
E[X] = λ/(α-1) for α > 1, Var(X) = α λ^2 / ((α-1)^2 (α-2)) for α > 2.

2.4.2 Lognormal distribution
The random variable X has a lognormal distribution with parameters μ and σ, where σ > 0, denoted X ~ LN(μ, σ^2), where μ and σ^2 are the mean and variance of log(X). The lognormal distribution has a positive skew and is a very good distribution for modelling claim amounts. The probability density function, expectation and variance are
f(x) = 1/(x σ √(2π)) exp(-(log x - μ)^2 / (2σ^2)), for x > 0
E[X] = exp(μ + σ^2/2), Var(X) = exp(2μ + σ^2) (exp(σ^2) - 1).

2.4.3 Gamma distribution
The gamma distribution is very useful for modelling claim amount distributions. It has shape parameter α and rate parameter λ. The random variable X has a gamma distribution with parameters α and λ, where α, λ > 0, denoted X ~ Ga(α, λ) or X ~ Gamma(α, λ). The probability density function, expectation and variance are
f(x) = λ^α x^(α-1) e^(-λx) / Γ(α), for x > 0
E[X] = α/λ, Var(X) = α/λ^2.

2.4.4 Weibull distribution
The Weibull distribution is an extreme value distribution; because of its survival function it is used widely in modelling lifetimes. The random variable X has a Weibull distribution with shape parameter α and scale parameter λ, where α, λ > 0, denoted X ~ W(α, λ). The probability density function, expectation and variance are
f(x) = (α/λ) (x/λ)^(α-1) exp(-(x/λ)^α), for x > 0
E[X] = λ Γ(1 + 1/α), Var(X) = λ^2 [Γ(1 + 2/α) - Γ(1 + 1/α)^2].

2.5 Simulation of Aggregate Claims using R
In Section 2.3 we discussed aggregate claims and the various compound distributions used to model them. In this section we perform random simulation using an R program.

2.5.1 Simulation using R
The simulation of aggregate claims was implemented using packages such as actuar and MASS. The generic R code available in Programs/Aggregate_Claims_Methods.r, given in Appendix 1, implements simulation of randomly generated aggregate claims for any compound distribution. The following R code generates simulated aggregate claim data for a compound Poisson distribution with gamma claim amounts, denoted CP(10, Gamma(1, 1)).
require(actuar)
require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Sim.Sample = SimulateAggregateClaims(ClaimNo.Dist = "pois", ClaimNo.Param = list(lambda = 10),
                                     ClaimAmount.Dist = "gamma", ClaimAmount.Param = list(shape = 1, rate = 1),
                                     No.Samples = 2000)
names(Sim.Sample)

The SimulateAggregateClaims method in Programs/Aggregate_Claims_Methods.r generates and returns simulated aggregate samples along with the expected and observed moments. The simulated data can then be used to perform various tests, comparisons and plots.

2.5.2 Comparison of Moments
The expected and observed moments are compared to test the correctness of the data. The following R code returns the expected and observed mean and variance of the simulated data respectively.

Sim.Sample$Exp.Mean; Sim.Sample$Exp.Variance
Sim.Sample$Obs.Mean; Sim.Sample$Obs.Variance

Table 2.1 below shows the simulated values for different sample sizes. Clearly the observed and expected moments are similar, and the difference between them shrinks as the number of samples increases.

Sample size          100       1000      10000     100000
Observed mean        10.431    9.953     10.008    9.986
Expected mean        10        10        10        10
Observed variance    20.725    19.692    20.275    19.810
Expected variance    20        20        20        20

Table 2.1 Comparison of observed and expected moments for different sample sizes.

2.5.3 Histogram with fitted distribution curves
Histograms provide useful information on skewness, extreme points in the data and outliers, and can be compared graphically with the shapes of standard distributions. Figure 2.1 below shows the histogram of the simulated data compared with standard distributions such as the Weibull, normal, lognormal and gamma. The function PlotAggregateClaimsData(Agg.Claims) is used to plot the histogram along with the fitted standard distributions. The histogram is plotted with 50 breaks. The simulated data are then fitted using the fitdistr() function in the MASS package for the normal, lognormal, gamma and Weibull distributions. The following R code shows how fitdistr() is used to compute the gamma parameters and plot the corresponding curve, as shown in Figure 2.1.

gamma.fit = fitdistr(Agg.Claims, "gamma")
Shape = gamma.fit$estimate[1]
Rate  = gamma.fit$estimate[2]
Scale = 1/Rate
Left  = min(Agg.Claims)
Right = max(Agg.Claims)
Seq   = seq(Left, Right, by = 0.01)
lines(Seq, dgamma(Seq, shape = Shape, rate = Rate), col = "blue")

Figure 2.1 Histogram of simulated aggregate claims with fitted standard distribution curves.
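As a quick cross-check on the moment formulas of Section 2.2.1, the same CP(10, Gamma(1, 1)) model can also be simulated with base R alone, without the helpers in Aggregate_Claims_Methods.r. This is a minimal sketch and the object names are illustrative only.

# Minimal sketch: simulate S = X1 + ... + XN with N ~ Poisson(10) and Xi ~ Gamma(1, 1),
# then compare the sample moments with E[S] = lambda*E[X] and Var(S) = lambda*E[X^2].
set.seed(1)
n.sim  <- 10000
lambda <- 10
shape  <- 1
rate   <- 1
Sim.S <- replicate(n.sim, {
  N <- rpois(1, lambda)
  if (N == 0) 0 else sum(rgamma(N, shape = shape, rate = rate))
})
mean(Sim.S); var(Sim.S)                        # observed moments
lambda * shape / rate                          # E[S]   = lambda * E[X]   = 10
lambda * (shape / rate^2 + (shape / rate)^2)   # Var(S) = lambda * E[X^2] = 20

The observed moments should be close to the theoretical values 10 and 20, matching the expected row of Table 2.1.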
2.5.4 Goodness of fit
A goodness-of-fit test compares the closeness of expected and observed values to conclude whether it is reasonable to accept that the random sample follows a standard distribution. It is a type of hypothesis test with hypotheses defined as follows:
H0: the data follow the standard distribution.
H1: the data do not follow the standard distribution.
The chi-square test is one way to test goodness of fit. The test compares the histogram with the fitted density. The data are grouped into k intervals whose breaks are computed using quantiles; the observed frequency O_i is the histogram count in each interval, and the expected frequency E_i is the difference of the fitted c.d.f. over the interval multiplied by the sample size. The test statistic is
χ² = Σ_{i=1}^{k} (O_i - E_i)² / E_i,
where O_i and E_i are the observed and expected frequencies for the k breaks respectively. To perform the test we use 100 breaks, splitting the data into 100 equal cells, and use the histogram counts to group the data based on the observed values. Large values of χ² lead to rejecting the null hypothesis H0. The test statistic follows a χ² distribution with k - p - 1 degrees of freedom, where p is the number of parameters of the fitted distribution. The p-value is computed using 1 - pchisq(), and H0 is accepted if the p-value is greater than the significance level α. The following R code computes the chi-square test:

Test.ChiSq = PerformChiSquareTest(Samples.Claims = Sim.Sample$AggregateClaims, No.Samples = N.Samples)
Test.ChiSq$DistName
Test.ChiSq$X2Val; Test.ChiSq$pvalue
Test.ChiSq$Est1; Test.ChiSq$Est2

                 Gamma      Normal     Lognormal   Weibull
Chi-square       125.466    160.2884   439         91
p-value          5.609*     0

Table 2.2 Chi-square and p-values for the compound Poisson distribution.

The highest p-value signifies the best fit of the data to a standard distribution. In the above simulation, Table 2.2 shows that the Weibull distribution provides the best fit, with parameters shape = 2.348 and scale = 11.32; eyeballing the histogram confirms the same.

2.6 Fitting Danish Data

2.6.1 The Danish data source of information
In this section we use a statistical model and fit a compound distribution to compute aggregate claims using historical data. Fitting data to a probability distribution using R is an interesting exercise, and it is worth quoting "all models are wrong, but some are useful" (George E. P. Box and Norman R. Draper, 1987). In the previous section we explained fitting distributions, comparison of moments and goodness of fit for simulated data. The data source used here is the Danish data composed from Copenhagen Reinsurance, containing over 2000 fire loss claims recorded during the period 1980 to 1990. The data are adjusted for inflation to reflect 1985 values and are expressed in millions of Danish kroner (DKK). There are 2167 rows of data over 11 years. Grouping the data by year results in only 11 aggregate samples, which would be insufficient information to fit and plot the distribution; therefore the data are grouped month-wise, giving 132 samples. Figure 2.2 shows the time series plot of the monthly aggregate claims from 1980 to 1990; it also shows the extreme loss values and their times of occurrence. There are no seasonal effects in the data: a two-sample test comparing summer and winter data gives a t-test value indicating no difference, so we conclude there is no seasonal variation.

Figure 2.2 Time series plot of Danish fire loss insurance data, month-wise, 1980-1990.

The expectation and variance of the aggregate claims are 55.572 and 1440.7 respectively, and the expectation and variance of the aggregate claim numbers are 16.417 and 28.2. As discussed in Section 2.3.3, the negative binomial distribution is a natural choice for modelling the claim numbers, since the variance is greater than the mean. The data are plotted and fitted to a histogram using the fitdistr() function in the MASS package of R.
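The two-sample seasonality check mentioned above can be reproduced along the following lines. This is a sketch only: the Danish.Monthly data frame and its Month and Agg.Claim columns are assumed names used for illustration and are not part of the dissertation's own code.

# Hypothetical sketch of the summer/winter comparison described above.
# Danish.Monthly is assumed to hold one row per month with the month number (1-12)
# and the monthly aggregate claim amount.
summer <- Danish.Monthly$Agg.Claim[Danish.Monthly$Month %in% 4:9]
winter <- Danish.Monthly$Agg.Claim[Danish.Monthly$Month %in% c(1:3, 10:12)]
# Two-sample t-test; a large p-value is consistent with no seasonal difference
# in the mean monthly aggregate claims.
t.test(summer, winter)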
2.6.2 Analysis of Danish data
We take the following steps to analyse and fit the Danish loss insurance data:
1. Obtain the claim numbers and aggregate loss claim data month-wise.
2. As discussed in Section 2.6.1, choose the negative binomial as the primary distribution and use the fitdistr() function to obtain its parameters.
3. Conduct a chi-square goodness-of-fit test for the claim amount distribution on the aggregate claims and obtain the necessary parameters.
4. Simulate 1000 samples as in Section 2.5.1, and plot the histogram along with the fitted standard distributions as described in Section 2.5.2.
5. Perform the chi-square test to identify the optimal fit and obtain the distribution parameters.

2.6.3 R program implementation
The following R code reads the Danish data available in Data/DanishData.txt, segregates the claims month-wise, calculates the sample mean and variance, and plots the histogram with the fitted standard distributions.

require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Danish.Data = ComputeAggClaimsFromData("Data/DanishData.txt")
Danish.Data$Agg.ClaimData = round(Danish.Data$Agg.ClaimData, digits = 0)
mean(Danish.Data$Agg.ClaimData)
var(Danish.Data$Agg.ClaimData)
Danish.Data$Agg.ClaimData
mean(Danish.Data$Agg.ClaimNos)
var(Danish.Data$Agg.ClaimNos)

Figure 2.3 Actual Danish fire loss data fitted with standard distributions, 132 samples.

In this initial case N has a negative binomial distribution with parameters k = 25.32 and p = 0.6067.

                 Gamma      Normal     Lognormal   Weibull
Chi-square       95.273     142.243    99.818      118
p-value          0.53061    0.0019     0.40199     0.072427

Table 2.3 Chi-square and p-values for the Danish fire loss insurance data.

Based on the chi-square goodness-of-fit test shown in Table 2.3, we take the secondary (claim amount) distribution to be a gamma distribution with parameters shape = 3.6559 and scale = 15.21363. We simulate 1000 samples and obtain aggregate claim samples as in Section 2.5.1. The plot and chi-square test values are given below. The generic function PerformChiSquareTest(), previously discussed in Section 2.5.4, is used here to compute the χ² values and p-values for each fitted distribution.

Figure 2.4 Histogram of simulated samples of Danish data fitted with standard distributions.

Figure 2.4 above shows the simulated samples of the Danish data for a sample size of 1000, together with the different distribution curves fitted to the simulated data. The chi-square values are tabulated in Table 2.4 below.

                 Normal     Gamma      Lognormal   Weibull
Chi-square       123.32     84.595     125.75      115.50
p-value          0.036844   0.8115     0.02641     0.09699

Table 2.4 Chi-square and p-values for the compound negative binomial distribution fitted to the Danish insurance loss data.

The results described in Table 2.4 suggest that the optimal choice of claim amount model is the gamma distribution with parameters shape = 8.446 and rate = 0.00931.

Chapter 3 Survival Models and Graduation

In the previous chapter we discussed aggregate claims and how they can be modelled and simulated using R. In this chapter, we discuss one of the important factors behind the occurrence of a claim: human mortality. Life insurance companies use this factor to model the risk arising from claims. We analyse and investigate the crude data presented in the Human Mortality Database for the specific countries Scotland and Sweden and use statistical techniques to smooth the data. The MortalitySmooth package is used to smooth the data based on the Bayesian information criterion (BIC), a technique used to determine the smoothing parameter; we also plot the data. Finally we conclude by comparing the mortality of the two countries over time.

3.1 Introduction
Mortality data are, in simple terms, recordings of the deaths of members of a defined set.
This collection of data can vary across different variables such as sex, age, year, geographical location and species. In this chapter we use human data grouped by country population, sex, age and year. Human mortality in urban nations has improved significantly over the past few centuries. This was attributed largely to an improved standard of living and national health services, but in later decades there have also been tremendous improvements in health care, with strong demographic and actuarial implications. Here we use human mortality data to analyse mortality trends, compute life tables and price different annuity products.

3.2 Sources of Data
The Human Mortality Database (HMD) is used to extract the deaths and exposure data, which are collected from national statistical offices. In this dissertation, we look at two countries, Sweden and Scotland, for specific ages and years. The deaths and exposure data are downloaded from the HMD at:
Sweden
Deaths https://www.mortality.org/hmd/SWE/STATS/Deaths_1x1.txt
Exposure https://www.mortality.org/hmd/SWE/STATS/Exposures_1x1.txt
Scotland
Deaths https://www.mortality.org/hmd/GBR_SCO/STATS/Deaths_1x1.txt
Exposure https://www.mortality.org/hmd/GBR_SCO/STATS/Exposures_1x1.txt
They are saved as .txt data files in the directory under /Data/Conutryname_deaths.txt and /Data/Conutryname_exposures.txt respectively. In general, data availability and formats vary over countries and time. The female and male death and exposure data are taken from the raw data; the total column in the data source is calculated as a weighted average based on the relative sizes of the two groups, male and female, at a given time.

3.3 P-Spline Techniques in Smoothing Data
The well-known actuary Benjamin Gompertz observed that over a long period of human life the force of mortality increases geometrically with age. This was modelled for a single year of life, and the Gompertz model is linear on the log scale. The Gompertz law states that the mortality rate increases in a geometric progression, so the death rate is
μ_x = A B^x, with A > 0 and B > 1,
and the linear model is fitted by taking logs on both sides:
log μ_x = a + bx, where a = log A and b = log B.
The corresponding quadratic model is
log μ_x = a + bx + cx².

3.3.1 Generalized Linear Models and P-splines in smoothing data
Generalized linear models (GLMs) are an extension of linear models that allow models to be fitted to data that follow probability distributions such as the Poisson and binomial. If d_x is the number of deaths at age x and E^c_x is the central exposed to risk, then the maximum likelihood estimate of the force of mortality is μ̂_x = d_x / E^c_x, and in the GLM formulation d_x follows a Poisson distribution, d_x ~ Poisson(E^c_x μ_x), with log μ_x = a + bx.

We use P-spline techniques to smooth the data. As mentioned above, in the GLM the number of deaths follows a Poisson distribution, and we fit a quadratic regression using the exposure as the offset parameter. Splines are piecewise polynomials, usually cubic, joined using the property that the second derivatives are equal at the join points; these joints are called knots. A B-spline regression matrix is used to fit the data. A penalty function of linear, quadratic or cubic order penalizes irregular behaviour by placing a difference penalty on adjacent coefficients; this penalty enters the log likelihood together with a smoothing parameter λ, and the penalized likelihood is maximized to obtain the smoothed data. The larger the value of λ, the smoother the function but the greater the deviance, so an optimal value of λ is chosen to balance deviance against model complexity. λ is evaluated using techniques such as the Bayesian information criterion (BIC) and the Akaike information criterion (AIC).

The MortalitySmooth package in R implements the techniques mentioned above. There are several options for smoothing data with P-splines: the number of knots ndx, the degree of the P-spline (linear, quadratic or cubic) bdeg, and the smoothing parameter lambda. The methods in the MortalitySmooth package fit a P-spline model with equally spaced B-splines along the x axis. There are four possible methods in this package to smooth data; BIC is the default used by MortalitySmooth. AIC minimization is also available, but BIC provides better outcomes for large counts. In this dissertation, we smooth the data using the default option BIC and, alternatively, using a fixed lambda value.
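Before any P-spline smoothing is applied, the Poisson GLM of Section 3.3.1 can be illustrated directly with glm(), fitting the Gompertz (log-linear) model for a single year of data. The sketch below is illustrative only: dx and Ec are assumed to be vectors of deaths and central exposures by single year of age, extracted from the matrices described in the next section.

# Minimal sketch of the Gompertz fit as a Poisson GLM with log link:
#   D_x ~ Poisson(E_x * mu_x),  log(mu_x) = a + b*x
# 'dx' (deaths) and 'Ec' (central exposed to risk) are assumed vectors over these ages.
age <- 30:90
gompertz.fit <- glm(dx ~ age, family = poisson(link = "log"), offset = log(Ec))
summary(gompertz.fit)   # intercept = log(A), slope = log(B) on the log scale
plot(age, log(dx / Ec), pch = 16, xlab = "Age", ylab = "log(mortality)")
lines(age, coef(gompertz.fit)[1] + coef(gompertz.fit)[2] * age, col = "red")

The P-spline approach used below generalizes this straight-line fit by replacing a + bx with a penalized B-spline basis.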
Larger the value of implies smoother is the function but more deviance. Thus, optimal value of is chosen to balance deviance and model complexity. is evaluated using various techniques such as BIC Bayesian information criterion and AIC Akaikes information criterion techniques. Mortalitysmooth package in R implements the techniques mentioned above in smoothing data, There are different options or choices to smooth data using P-splines, The number of knots ndx ,the degree of p-spine whether linear, quadratic or cubic bdeg and the smoothing parameter lambda. The methods in MortalitySmooth package fit a P-splines model with equally-spaced B-splines along x axis. There are four possible methods in this package to smooth data, and BIC is the default value chosen by MortalitySmooth in smoothing data. AIC minimization is also available but BIC provides better outcome for large values. In this di ssertation, we shall smooth the data using default option BIC and using lambda value. 3.4 MortalitySmooth Package in R program implementation In this section we describe the generic implementation of using R programming to read deaths and exposure data from human mortality database and use MortalitySmooth10 package to smooth the data based on P-splines. The following code presents below loads the following require(MortalitySmooth) source(Programs/Graduation_Methods.r) Age -30:90; Year - 1959:1999 country -scotland ;Sex - Males death =LoadHMDData(country,Age,Year,Deaths,Sex ) exposure =LoadHMDData(country,Age,Year,Exposures,Sex ) FilParam.Val -40 Hmd.SmoothData =SmoothedHMDDataset(Age,Year,death,exposure) XAxis - Year YAxis-log(fitted(Hmd.SmoothData$Smoothfit.BIC)[Age==FilParam.Val,]/exposure[Age==FilParam.Val,]) plotHMDDataset(XAxis ,log(death[Age==FilParam.Val,]/exposure[Age==FilParam.Val,]) ,MainDesc,Xlab,Ylab,legend.loc ) DrawlineHMDDataset(XAxis , YAxis ) The MortalitySmooth package is loaded and the generic implementation of methods to execute graduation smoothening is avail able in Programs/Graduation_Methods.r. The step by step description of the code is explained below. Step:1 Load Human Mortality data Method Name LoadHMDData Description Return an object of matrix type which is a mxn dimension with m representing number of ages and n representing number of years. This object is specifically formatted to be used in Mortality2Dsmooth function. Implementation LoadHMDData(Country,Age,Year,Type,Sex) Arguments Country Name of the country for which data to be loaded. If country is Denmark,Sweden,Switzerland or Japan the SelectHMDData function of MortalitySmooth package is called internally. Age Vector for the number of rows defined in the matrix object. There must be at least one value. Year Vector for the number of columns defined in the matrix object. There must be at least one value. Type A value which specifies the type of data to be loaded from Human mortality database. It can take values as Deaths or Exposures Sex An optional filter value based on which data is loaded into the matrix object. It can take values Males, Females and Total. Default value being Total Details The method LoadHMDData in Programs/Graduation_Methods.r reads the data available in the directory named Data to load deaths or exposure for the given parameters. The data can be filtered based on country, age, year, type based on Deaths or Exposures and lastly by sex. 
Figure 3.1 Format of the matrix objects Death and Exposure for Scotland, with age ranging from 30 to 90 and year from 1959 to 1999.

Figure 3.1 shows the format used in the Death and Exposure objects to store data: a matrix object with ages in rows and years in columns. The MortalitySmooth package functions only for the specific countries listed in the package, namely Denmark, Switzerland, Sweden and Japan. The data for these countries can be loaded directly using the SelectHMDData() function available in the MortalitySmooth R package. The LoadHMDData function checks the value of the variable Country; if it is equal to any of the four, the SelectHMDData() function is used, otherwise the customized generic function is called to return the data objects. The format of the returned matrix objects is exactly the same in both cases.

Step 2: Smoothing the HMD dataset
Method name: SmoothedHMDDataset
Description: Returns a list of smoothed objects, based on BIC and on a fixed lambda, of matrix type with dimension m x n, where m is the number of ages and n the number of years. These objects are formatted specifically for use with the Mortality2Dsmooth() function and are customized for mortality data only. The Smoothfit.BIC and Smoothfit.fitLAM objects are returned along with the fitBIC.Data fitted values.
Implementation: SmoothedHMDDataset(Xaxis, YAxis, ZAxis, Offset.Param)
Arguments:
Xaxis - vector for the abscissa of the data used in the Mortality2Dsmooth function; here the age vector is the value of Xaxis.
Yaxis - vector for the ordinate of the data used in the Mortality2Dsmooth function; here the year vector is the value of Yaxis.
ZAxis - matrix of count responses used in the Mortality2Dsmooth function; here the Death matrix is the value of ZAxis, and the dimensions of ZAxis must correspond to the lengths of Xaxis and Yaxis.
Offset.Param - a matrix of prior known values to be included in the linear predictor when fitting the 2D data.
Details: The method SmoothedHMDDataset in Programs/Graduation_Methods.r smooths the data based on the death and exposure objects loaded in Step 1. Age, year and death are passed as the x-axis, y-axis and z-axis respectively, with exposure as the offset parameter. These parameters are fitted internally by the Mortality2Dsmooth function of the MortalitySmooth package when smoothing the data.

Step 3: Plot the smoothed data based on user input
Method name: PlotHMDDataset
Description: Plots the smoothed object with user-given information such as the axes, legend, axis scale and main description details.
Implementation: PlotHMDDataset(Xaxis, YAxis, MainDesc, Xlab, Ylab, legend.loc, legend.Val, Plot.Type, Ylim)
Arguments:
Xaxis - vector of x-axis values to plot; here this is age or year, depending on the user's request.
Yaxis - vector of y-axis values to plot; here these are the smoothed log mortality values filtered for a particular age or year.
MainDesc - main plot caption describing the plot.
Xlab - x-axis label.
Ylab - y-axis label.
legend.loc - a customized location for the legend; it can take the values "topright" and "topleft".
legend.Val - customized legend descriptions; it can take a vector of strings.
Plot.Type - an optional value to change the plot type. The default is the plot's own default; if the value is 1, a figure with a line is plotted.
Ylim - an optional value to set the height of the y axis; by default it takes the maximum value of the Y vector.
Details: The generic method PlotHMDDataset in Programs/Graduation_Methods.r plots the smoothed fitted mortality values, with options to customize the plot based on user inputs. The generic method DrawlineHMDDataset in Programs/Graduation_Methods.r plots the fitted line and is usually called after PlotHMDDataset.

3.5 Graphical Representation of Smoothed Mortality Data
In this section we look at graphical representations of the mortality data for the selected countries Scotland and Sweden. The generic program discussed in Section 3.4 is used to produce the plots based on user inputs.

Log mortality of smoothed data vs. actual fit for Sweden
Figure 3.3 Left panel: plot of year vs. log(mortality) for Sweden based on age 40 and years from 1945 to 2005. The points represent real data and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively. Right panel: plot of age vs. log(mortality) for Sweden based on year 1995 and ages from 30 to 90. The points represent real data and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.

Figure 3.3 shows the smoothed mortality against the actual data for Sweden, by year and by age respectively. The actual data are displayed as points, and the red and blue curves are the smoothed fits: the MortalitySmooth package smooths the data in two different ways, using the default BIC technique and using lambda = 10000.

Log mortality of smoothed data vs. actual fit for Scotland
Figure 3.4 Left panel: plot of year vs. log(mortality) for Scotland based on age 40 and years from 1945 to 2005. The points represent real data and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively. Right panel: plot of age vs. log(mortality) for Scotland based on year 1995 and ages from 30 to 90. The points represent real data and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.

Figure 3.4 shows the smoothed mortality against the actual data for Scotland, by year and by age respectively. The actual data are displayed as points, and the red and blue curves are the smoothed fits obtained with BIC and with lambda = 10000.

Log mortality of females vs. males for Sweden
Figure 3.5 below represents the mortality rates for males and females in Sweden, age-wise and year-wise. The left panel of Figure 3.5 reveals that male mortality is higher than female mortality over the years, with a sudden increase in male mortality from the mid 1960s to the late 1970s. Life expectancy for Swedish males in 1960 was 71.24 years versus 74.92 for women, and over the next decade it increased to 77.06 for women but only 72.2 for men, which explains the trend. The right panel of Figure 3.5 shows that male mortality is higher than female mortality: the male-to-female sex ratio is 1.06 at birth and decreases steadily to 1.03 for ages 15-64 and 0.79 at ages 65 and above, clearly reflecting that the increase in mortality for Swedish males is greater than for females.

Figure 3.5 Left panel: plot of year vs. log(mortality) for Sweden based on age 40 and years from 1945 to 2005. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed fitted BIC curves for males and females respectively. Right panel: plot of age vs. log(mortality) for Sweden based on year 2000 and ages from 25 to 90.
The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed fitted BIC curves for males and females respectively.

Log mortality of females vs. males for Scotland
The left panel of Figure 3.6 shows a consistent dip in mortality rates, but male mortality has exceeded female mortality by a steadily increasing margin for people aged 40 over a long period starting in the mid 1950s. The right panel of Figure 3.6 shows that male mortality is higher than female mortality: the male-to-female sex ratio is 1.04 at birth and decreases steadily to 0.94 for ages 15-64 and 0.88 at ages 65 and above, clearly reflecting that the increase in mortality for Scottish males is greater than for females.

Figure 3.6 Left panel: plot of year vs. log(mortality) for Scotland based on age 40 and years from 1945 to 2005. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed fitted BIC curves for males and females respectively. Right panel: plot of age vs. log(mortality) for Scotland based on year 2000 and ages from 25 to 90. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed fitted BIC curves for males and females respectively.

Log mortality of Scotland vs. Sweden
The left panel of Figure 3.7 shows that mortality rates for Scotland are higher than for Sweden. There has been a consistent decrease in mortality rates for Sweden beginning in the mid 1970s, whereas Scotland's mortality rates, though they decreased for a period, then started to show an upward trend; this could be attributed to changes in living conditions.

Figure 3.7 Left panel: plot of year vs. log(mortality) for Sweden and Scotland based on age 40 and years from 1945 to 2005. The red and blue points represent real data for Sweden and Scotland respectively, and the red and blue curves represent smoothed fitted BIC curves for Sweden and Scotland respectively. Right panel: plot of age vs. log(mortality) for Sweden and Scotland based on year 2000 and ages from 25 to 90. The red and blue points represent real data for Sweden and Scotland respectively, and the red and blue curves represent smoothed fitted BIC curves for Sweden and Scotland respectively.

Wednesday, January 1, 2020

The modern assembler

Q1. Design specification of an assembler with diagram?

Ans. Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution, e.g. to generate common short sequences of instructions to run inline instead of in a subroutine. Assemblers are generally simpler to write than compilers for high-level languages and have been available since the 1950s. Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC, and HP PA-RISC, as well as for x86(-64), optimize instruction scheduling to exploit the CPU pipeline efficiently.

There are two types of assemblers, based on how many passes through the source are needed to produce the executable program. One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them. Two-pass assemblers (and multi-pass assemblers) create a table of all unresolved symbols in the first pass and then use the second pass to resolve these addresses. The advantage of one-pass assemblers is speed, which is not as important as it once was given advances in computer speed and capabilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source. As a result, the program can be defined in a more logical and meaningful way, which makes two-pass assembler programs easier to read and maintain.

More sophisticated high-level assemblers provide language abstractions such as:
* Advanced control structures
* High-level procedure/function declarations and invocations
* High-level abstract data types, including structures/records, unions, classes, and sets
* Sophisticated macro processing
* Object-oriented features such as encapsulation, polymorphism, inheritance, and interfaces

Note that, in normal professional usage, the term assembler is often used ambiguously: it frequently refers to the assembly language itself rather than to the assembler utility. Thus "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a widely-used S/370 assembler".

Assembly language

A program written in assembly language consists of a series of instruction mnemonics that correspond to a stream of executable instructions which, when translated by an assembler, can be loaded into memory and executed. For example, an x86/IA-32 processor can execute the following binary instruction as expressed in machine language:

Binary: 10110000 01100001 (Hexadecimal: B0 61)

The equivalent assembly language representation is easier to remember (example in Intel syntax):

MOV AL, #61h

This instruction means: move the value 61h (97 decimal; the h suffix means hexadecimal and the hash sign marks an immediate value, not a memory location) into the processor register named AL. The mnemonic mov represents the opcode 1011, which moves the value in the second operand into the register indicated by the first operand.
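As a rough, illustrative sketch of this kind of table-driven translation (not the design of any particular assembler), the following Python snippet encodes the single instruction family MOV reg8, imm8, whose 8086 encoding is an opcode byte of 0xB0 plus the register number, followed by the immediate byte; this is exactly how MOV AL, 61h becomes B0 61. The function name encode_mov_imm8 and the table REG8 are made up for this example.

# Minimal sketch (illustrative only): encode "MOV reg8, imm8" into machine code bytes.
REG8 = {"AL": 0, "CL": 1, "DL": 2, "BL": 3, "AH": 4, "CH": 5, "DH": 6, "BH": 7}

def encode_mov_imm8(statement):
    """Translate a statement such as 'MOV AL, #61h' into a list of byte values."""
    mnemonic, operands = statement.split(None, 1)
    if mnemonic.upper() != "MOV":
        raise ValueError("this toy encoder only handles MOV reg8, imm8")
    reg, imm = [part.strip() for part in operands.split(",")]
    imm = imm.lstrip("#")                       # tolerate the '#' immediate marker
    value = int(imm[:-1], 16) if imm.lower().endswith("h") else int(imm)
    return [0xB0 + REG8[reg.upper()], value & 0xFF]

print(["%02X" % b for b in encode_mov_imm8("MOV AL, #61h")])   # prints ['B0', '61']

A real assembler does the same lookup-and-substitute work for every instruction format of the target machine, in addition to resolving symbols and processing directives.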
The mnemonic was chosen by the instruction set designer to abbreviate "move", making it easier for the programmer to remember. A comma-separated list of arguments or parameters follows the opcode; this is a typical assembly language statement. In practice many programmers drop the word "mnemonic" and, technically incorrectly, call mov an opcode. When they do this they are referring to the underlying binary code that it represents. To put it another way, a mnemonic such as mov is not an opcode, but because it symbolizes an opcode, one might speak of "the opcode mov" when one means the binary opcode it symbolizes rather than the symbol (the mnemonic) itself. Since few modern programmers need to be mindful of which binary patterns are the opcodes for specific instructions, the distinction has in practice become somewhat blurred among programmers, but not among processor designers. Transforming assembly into machine language is accomplished by an assembler, and the reverse by a disassembler. Unlike in high-level languages, there is usually a one-to-one correspondence between simple assembly statements and machine language instructions. However, in some cases an assembler may provide pseudoinstructions which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero" (on the result of the set instruction). Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.

Each computer architecture and processor architecture has its own machine language. On this level, each instruction is simple enough to be executed using a relatively small number of electronic circuits. Computers differ in the number and type of operations they support. For example, a new 64-bit machine would have different circuitry from a 32-bit machine. They may also have different sizes and numbers of registers, and different representations of data types in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ, and the corresponding assembly languages reflect these differences. Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually the one supplied by the manufacturer and used in its documentation.

Language design

Basic elements

Any assembly language consists of three types of instruction statements, which are used to define the program operations:
* opcode mnemonics
* data sections
* assembly directives

Opcode mnemonics

Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, an opcode is a symbolic name for a single executable machine language instruction, and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be either immediate (typically one-byte values, coded in the instruction itself) or the addresses of data located elsewhere in storage.
This is determined by the underlying processor architecture: the assembler merely reflects how that architecture works.

Data sections

These are instructions used to define data elements that hold data and variables. They define the type, length, and alignment of the data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined.

Assembly directives / pseudo-ops

Assembly directives are instructions that are executed by the assembler at assembly time, not by the CPU at run time. They can make the assembly of the program dependent on parameters input by the programmer, so that one program can be assembled in different ways, perhaps for different applications. They can also be used to manipulate the presentation of the program to make it easier for the programmer to read and maintain. (For example, pseudo-ops would be used to reserve storage areas and optionally set their initial contents.) The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Some assemblers also support pseudo-instructions, which generate two or more machine instructions.

Symbolic assemblers allow programmers to associate arbitrary names (labels or symbols) with memory locations. Usually, every constant and variable is given a name so that instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any call to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are lexically distinct from normal symbols (e.g., the use of 10$ as a GOTO destination). Most assemblers provide flexible symbol management, allowing programmers to manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the results of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.

Assembly languages, like most other computer languages, allow comments to be added to the source code; these are ignored by the assembler. Good use of comments is even more important with assembly code than with higher-level languages, as the meaning and purpose of a sequence of instructions is harder to decipher from the code itself. Wise use of these facilities can greatly simplify the problems of coding and maintaining low-level code. Raw assembly source code as generated by compilers or disassemblers (code without any comments, meaningful symbols, or data definitions) is quite difficult to read when changes must be made.

Macros

Many assemblers support macros: programmer-defined symbols that stand for some sequence of text lines. This sequence of text lines may include a sequence of instructions or a sequence of data storage pseudo-ops. Once a macro has been defined using the appropriate pseudo-op, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them just as though they had appeared in the source code file all along (including, in better assemblers, expansion of any macros appearing in the replacement text).
Since macros can have short names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear much shorter, requiring fewer lines of source code from the application programmer, as with a higher-level language. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters, and provide other similar features. Many assemblers have built-in macros for system calls and other special code sequences. Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the expansion of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate a large number of assembly language instructions or data definitions based on the macro arguments. This could be used to generate record-style data structures or unrolled loops, for example, or could generate entire algorithms based on complex parameters. An organization using assembly language that has been heavily extended with such a macro suite can be considered to be working in a higher-level language, since such programmers are not working with a computer's lowest-level conceptual elements.

Macros were used to customize large-scale software systems for specific customers in the mainframe era, and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems; this was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's real-time transaction processing add-ons: CICS (Customer Information Control System) and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large Global Distribution Systems (GDS) and credit card systems today. It was also possible to use solely the macro processing capabilities of an assembler to generate code written in completely different languages, for example to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code. This was because, as was realized in the 1970s, the concept of macro processing is independent of the concept of assembly; the former is, in modern terms, closer to text processing than to generating object code. The concept of macro processing appeared, and still appears, in the C programming language, whose preprocessor supports instructions to define symbols and make conditional tests on their values. Note that, unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete, because it lacks the ability to loop or go to, the latter of which would allow looping. Despite the power of macro processing, it fell into disuse in high-level languages while remaining a perennial feature of assemblers. This was because many programmers were confused by macro parameter substitution and did not distinguish macro processing from assembly and execution. Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name.
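To make that concrete, here is a small, purely illustrative Python sketch of by-name textual substitution; the expand function and the hypothetical one-parameter macro scale (whose body multiplies its argument by a global value b, much like the foo example discussed next) are inventions for this example, not the mechanism of any particular macro assembler.

import re

def expand(body_lines, params, args):
    # Paste the caller's text verbatim wherever a formal parameter name appears.
    out = []
    for line in body_lines:
        for name, text in zip(params, args):
            line = re.sub(r"\b%s\b" % re.escape(name), text, line)
        out.append(line)
    return out

scale_body = ["load a*b"]                      # hypothetical macro body with parameter 'a'

print(expand(scale_body, ["a"], ["x"]))        # ['load x*b']     -- what the macro writer expected
print(expand(scale_body, ["a"], ["x-c"]))      # ['load x-c*b']   -- the argument text changes the meaning
print(expand(scale_body, ["a"], ["(x-c)"]))    # ['load (x-c)*b'] -- the usual defensive parenthesization

Because the substitution is textual, whatever the caller writes is spliced into the body character for character, which is the root of the parameter-substitution pitfalls described below.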
The most famous class of bugs that resulted was the use of a parameter that was itself an expression rather than a simple name when the macro writer expected a name. Consider the macro:

foo: macro a
     load a*b

The intention was that the caller would provide the name of a variable, and the global variable or constant b would be used to multiply a. If foo is called with the parameter a-c, an unexpected macro expansion occurs. To avoid this, users of macro processors learned to religiously parenthesize formal parameters inside macro definitions, and callers had to do the same to their actual parameters. PL/I and C feature macros, but this facility was underused, or dangerous when used, because macros can only manipulate text. On the other hand, homoiconic languages such as Lisp, Prolog, and Forth retain the power of assembly language macros because they are able to manipulate their own code as data.

Support for structured programming

Some assemblers have incorporated structured programming elements to encode execution flow. The earliest example of this approach was the Concept-14 macro set, originally proposed by Dr. H. D. Mills (March 1970) and implemented by Marvin Kessler at IBM's Federal Systems Division, which extended the S/360 macro assembler with IF/ELSE/ENDIF and similar control flow blocks.[3] This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). A curious design was A-natural, a stream-oriented assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system and of what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references, but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans. There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.

Q2. How does an assembler convert mnemonic code into machine-understandable form?

Ans. A simple instruction in assembly language such as "add 4 and 5" may look like 00111101001 in machine language. How does the computer know which bits encode the add instruction and which encode the numbers in that string of binary digits? An instruction like "add 4 and 5" would translate into something like:

MVI R1, 4
MVI R2, 5
ADD R1, R2

in assembly. This would then be translated into a sequence of opcodes such as 00111100 00111101 01111100. Now assume that the electronic circuitry of the processor treats high bits as a high voltage of, say, 5 V and low bits as 0 V. As soon as the first instruction (00111100) is read into memory, the corresponding voltages are generated and 4 gets stored in register R1. Similarly, 5 gets stored in register R2.
The last instruction selects the adder circuit and passes the contents of the two registers to it as input; the adder outputs the sum, which is then stored in a register as well.

Q3. Machine language and assembly language are both low-level languages but are still not the same. Give examples. Write a program to display your name using a machine language.

Ans. In computer science, a low-level programming language is a language that provides little or no abstraction from a computer's instruction set architecture. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being close to the hardware. A low-level language does not need a compiler or interpreter to run; the processor for which the language was written is able to run the code without either of these. By comparison, a high-level programming language isolates the execution semantics of a computer architecture from the specification of the program, making the process of developing a program simpler and more understandable. Low-level programming languages are sometimes divided into two categories: first generation and second generation.

First generation

The first-generation programming language, or 1GL, is machine code. It is the only language a microprocessor can understand directly. Today programmers almost never write programs directly in machine code, because not only does it (like assembly language) require attention to numerous details that a high-level language would handle automatically, but it also requires memorizing or looking up numerical codes for every instruction that is used. For this reason, second-generation programming languages abstract the machine code one level.

Example: a function in 32-bit x86 machine code to calculate the nth Fibonacci number:

8B542408 83FA0077 06B80000 0000C383 FA027706 B8010000 00C353BB 01000000 B9010000 008D0419 83FA0376 078BD98B C84AEBF1 5BC3

Second generation

The second-generation programming language, or 2GL, is assembly language. It is considered a second-generation language because, while it is not a microprocessor's native language, an assembly language programmer must still understand the microprocessor's unique architecture (such as its registers and instructions). These simple instructions are then assembled directly into machine code. The assembly code can also be abstracted to another layer, in a similar manner as machine code is abstracted into assembly code.

Example: the same Fibonacci number calculator as above, but in x86 assembly language using MASM syntax:

fib:
    mov edx, [esp+8]
    cmp edx, 0
    ja  @f
    mov eax, 0
    ret
@@:
    cmp edx, 2
    ja  @f
    mov eax, 1
    ret
@@:
    push ebx
    mov ebx, 1
    mov ecx, 1
@@:
    lea eax, [ebx+ecx]
    cmp edx, 3
    jbe @f
    mov ebx, ecx
    mov ecx, eax
    dec edx
    jmp @b
@@:
    pop ebx
    ret

Assembly languages have close to a one-to-one correspondence between symbolic instructions and executable machine codes. Assembly languages also include directives to the assembler, directives to the linker, directives for organizing data space, and macros. Macros can be used to combine several assembly language instructions into a high-level-language-like construct (as well as for other purposes). There are cases where a symbolic instruction is translated into more than one machine instruction, but in general symbolic assembly language instructions correspond to individual executable machine instructions. High-level languages are abstract.
Typically a single high-level instruction is translated into several (sometimes dozens or, in rare cases, even hundreds of) executable machine language instructions. Some early high-level languages had a close correspondence between high-level instructions and machine language instructions. For example, most of the early COBOL instructions translated into a very obvious and small set of machine instructions. The trend over time has been for high-level languages to increase in abstraction. Modern object-oriented programming languages are highly abstract (although, interestingly, some key object-oriented constructs do translate into a very compact set of machine instructions). Assembly language is much harder to program in than high-level languages. The programmer must pay attention to far more detail and must have an intimate knowledge of the processor in use. But high-quality hand-crafted assembly language programs can run much faster and use much less memory and fewer other resources than a similar program written in a high-level language. Speed increases of 2 to 20 times are fairly common, and increases of hundreds of times are occasionally possible. Assembly language programming also gives direct access to key machine features essential for implementing certain kinds of low-level routines, such as an operating system kernel or microkernel, device drivers, and machine control. High-level programming languages are much easier for less skilled programmers to work in and for semi-technical managers to supervise, and they allow faster development than work in assembly language, even with highly skilled programmers; development speed-ups of 10 to 100 times are fairly common. Programs written in high-level languages (especially object-oriented languages) are much easier and less expensive to maintain than similar programs written in assembly language (and for a successful software project, the vast majority of the work and expense is in maintenance, not initial development).

Q4. Can search and sort operations be performed in assembler? Give your comments.

Ans. Search and sort operations can be carried out in assembly language. Below are examples.

Sorting:

Again:  MOV  FLAG, 0        ; FLAG <- 0
Next:   MOV  AL, [BX]
        CMP  AL, [BX+1]     ; Compare current and next values
        JLE  Skip           ; Branch if current <= next
        XCHG AL, [BX+1]     ; If not, swap the contents of the
        MOV  [BX+1], AL     ; current location with the next one
        MOV  FLAG, 1        ; Indicate the swap
Skip:   INC  BX             ; BX <- BX + 1
        LOOP Next           ; Go to next value
        CMP  FLAG, 1        ; Was there any swap?
        JE   Again          ; If yes, repeat the process
        RET

Searching:

        MOV  FLAG, 0        ; FLAG <- 0
Next:   CMP  AX, [BX+SI]    ; Compare current value to VAL
        JE   Found          ; Branch if equal
        ADD  SI, 2          ; SI <- SI + 2, next value
        LOOP Next           ; Go to next value
        JMP  Not_Found
Found:  MOV  FLAG, 1        ; Indicate value found
        MOV  POSITION, SI   ; Return index of value in list
Not_Found:
        RET

Q5. What kinds of assemblers can be developed?

Ans. The following information describes some of the changes that are specific to assembler programs. In the TPF 4.1 system, assembler programs were limited to 4 KB in size; in the z/TPF system, assembler programs can be larger than 4 KB. To exploit this capability, you can change your assembler programs to use:
o The CLINKC, RLINKC, and SLINKC assembler linkage macros
o Multiple base registers
o Baseless instructions
* You can use the CALLC general macro in assembler programs to call C language functions.
* In the TPF 4.1 system, the TMSPC and TMSEC macros were provided to set up the interface between C language programs and macro service routines written in assembler language. In the z/TPF system, the PRLGC and EPLGC macros set up this interface by simulating the prolog and epilog code generated by GCC. The PRLGC and EPLGC macros were provided on the TPF 4.1 system through APAR PJ29640 so that new C library functions written on the TPF 4.1 system can be migrated with little or no change, and the TMSPC and TMSEC macros are still supported on the z/TPF system so that library functions already coded with those macros can be migrated with little or no code change. New library functions developed for the z/TPF system must be coded with the PRLGC and EPLGC macros.

Q6. What problems can be resolved using a two-pass assembler? Can both passes be handled in a single pass or not? Give your comments with an example.

Ans. Assembly language is a computer language in which each statement corresponds to one of the binary instructions recognized by the CPU. Assembly-language programs are translated into machine code by an assembler. Assembly languages are more cumbersome to use than regular (or high-level) programming languages, but they are much easier to use than pure machine languages, which require that all instructions be written in binary code. Complete computer programs are seldom written in assembly language. Instead, assembly language is used for short procedures that must run as fast as possible or must do special things to the computer hardware. For example, a typical short routine might take a number, check whether it is in the range 97 to 122 inclusive, and subtract 32 if so, otherwise leaving the number unchanged. (That particular subtraction happens to convert all lowercase ASCII codes to their uppercase equivalents.) Such a routine might be written in the assembly language of the Intel 8086 family of processors (which includes all PC-compatible computers); assembly languages for other processors are different. Everything after the semicolon on each line is a comment, ignored by the computer. Pseudo instructions such as PROC and ENDP tell the assembler how the program is organized, while all the other lines translate directly into binary codes for the CPU. Many of the most common operations in computer programming are hard to implement in assembly language. For example, there are no assembly language statements to open a file, print a number, or compute a square root. For these functions the programmer must write complicated routines from scratch, use services provided by the operating system, or call routines in a previously written library.

There are two types of assemblers, based on how many passes through the source are needed to produce the executable program. One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them. Two-pass assemblers (and multi-pass assemblers) create a table of all unresolved symbols in the first pass and then use the second pass to resolve these addresses. The advantage of one-pass assemblers is speed, which is not as important as it once was given advances in computer speed and capabilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source. As a result, the program can be defined in a more logical and meaningful way, which makes two-pass assembler programs easier to read and maintain.
The translation performed by an assembler is essentially a collection of substitutions: a machine operation code for a mnemonic, a machine address for a symbol, the machine encoding of a number for its character representation, and so on. Except for one factor, these substitutions could all be performed in one sequential pass over the source text. That factor is the forward reference: a reference to an instruction or symbol that has not yet been scanned by the assembler. It is the need to handle forward references without restriction that requires the separate passes of a two-pass assembler. If we instead impose certain restrictions, forward references can be handled without making two passes, and these different sets of restrictions lead to one-pass assemblers. Such one-pass assemblers are particularly attractive when secondary storage is either slow or missing entirely, as on many small machines. A small sketch of the two-pass approach follows.
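As an illustration only, here is a toy two-pass assembler sketched in Python for an invented three-instruction machine; the mnemonics (JMP, LOAD, HALT, WORD), the opcode values, and the one-or-two-word sizing rule are all assumptions made up for this example, not the design of any real assembler. Pass 1 only measures statements and records label addresses; pass 2 emits code once every symbol, including the forward-referenced start, is known.

# Toy two-pass assembler sketch (illustrative only; the "instruction set" and the
# sizing rule are invented). Pass 1 assigns an address to every label, including
# labels referenced before they are defined; pass 2 substitutes those addresses,
# which is how forward references are handled without restriction.

SOURCE = [
    "        JMP  start",    # forward reference: 'start' is defined two lines later
    "value:  WORD 42",
    "start:  LOAD value",
    "        HALT",
]

OPCODES = {"JMP": 0x10, "LOAD": 0x20, "HALT": 0xFF}

def parse(line):
    """Split a statement into (label, mnemonic, operand); label and operand may be None."""
    label = None
    if line.split()[0].endswith(":"):
        label, line = line.split(":", 1)
        label = label.strip()
    parts = line.split()
    return label, parts[0], parts[1] if len(parts) > 1 else None

def size_of(op, operand):
    # Toy sizing rule: WORD reserves one word; other statements take 1 or 2 words.
    return 1 if op == "WORD" or operand is None else 2

def assemble(source):
    # Pass 1: record each label's address in the symbol table.
    symbols, address = {}, 0
    for line in source:
        label, op, operand = parse(line)
        if label:
            symbols[label] = address
        address += size_of(op, operand)
    # Pass 2: every symbol is now known, so emit the final machine words.
    code = []
    for line in source:
        _, op, operand = parse(line)
        if op == "WORD":
            code.append(int(operand))
        else:
            code.append(OPCODES[op])
            if operand is not None:
                code.append(symbols[operand])
    return symbols, code

print(assemble(SOURCE))   # ({'value': 2, 'start': 3}, [16, 3, 42, 32, 2, 255])

A one-pass assembler would instead have to either back-patch the operand of JMP start once start is finally seen or restrict where forward references may appear, which is exactly the trade-off described above.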