Friday, January 17, 2020

How to Be a Successful Business Manager

Introduction

Everyone desires success. In the business field, becoming a successful manager is what the majority long for. Nobody turns into a well-rounded manager overnight; learning is necessary for everyone during this process. "People learn to manage by managing under the guidance of a good manager" - Michael Armstrong. Experience can undoubtedly be an excellent tutor for success, but a good guide further allows one to make use of that experience to the fullest extent. Being a successful manager requires a huge amount of skill and knowledge. This paper does not cover every element; five essential elements have been chosen for discussion: working smart, risk management, interpersonal talent and skills, self-management and leadership skills.

Work Smart - Work Effectively and Efficiently

To express how to work smartly, Susanne Madsen, a PRINCE2 and MSP practitioner and a qualified corporate and executive coach, tells us her tale of success. Working smarter, not harder, as explained by Susanne, means working less and accomplishing more by raising the quality of one's work. To get things done wisely, she highlights the significance of correcting one's internal persuasion and attitude towards a task so that it is not seen as a burden; one can then recover the vigour for work. Drawing on her own experience, she shows that self-assessment and continuous self-adjustment are of paramount importance for success. She also suggests two key points. The first is delegation: a manager should delegate the less vital jobs to others and use that time to communicate more with the key people associated with the project, developing a better linkage with them. The second is initiative: a manager should prepare a more well-rounded plan at the beginning instead of responding to an incident or risk only after it has occurred. I am glad to have read this inspirational article written by a top leader. It gives me great encouragement. Her sharing and skills are useful, not only in the workplace but also at the senior academic level. I believe that many college students, just like me, hold the improper belief that a project is burdensome. Why can't I alter my mindset, abandon the constraints and step up to success smartly? Just try it!

Self-management

Walter Vieira (2005) argues that people refuse to face the gap between their realistic self-image and their idealistic self-image (p. 33), and so evade the results of a SWOT analysis of their strengths, weaknesses, opportunities and threats (p. 34). However, people who do not understand their strengths and weaknesses will not be able to reinforce or improve them, and so cannot develop their abilities. In addition, they need to assess whether they have the basic elements, including a positive goal, ambition and energy, and whether they are clearly aware of their thoughts being adaptable, witty and creative. He says that people have two reasons for seeking an identity: one is a higher standard of living, and the other is satisfying others' expectations. He agrees with Cyrus Vance that people always measure their progress against their friends, and become unhappy when their achievements fall short of others' (p. 35). He therefore reminds people not to compare and assess their progress against others, because different people doing different things will make different progress; it is enough to keep to one's own plan (pp. 35-36). But one can learn from observing others, and through others observe oneself, and then combine the best qualities of oneself and others (p. 36). In my opinion, successful managers need a correct plan in order to achieve their positive goals; I think Walter Vieira missed this important point. They should therefore know how to assess whether their plan remains feasible as the environment changes. For example, a plan may need to change during a financial crisis.

Leadership skills

Leadership is the central factor in a general manager's success. In the book Successful Management, Neville (1995) believes that leadership has five main dimensions. First, defining a vision is the preliminary stage. The best leaders are adroit at thinking uniquely and picking up creativity; they are not faint-hearted about change. Once the vision has been ascertained, leaders should attain it (pp. 17-18). Second, he says that commitment to success is not only about eagerness, drive and the will to win; it is also about endless preparation (p. 18). Third, he believes a leader should communicate openly and frankly, because communication lets the team learn from each other's experiences (p. 18). Fourth is challenging the status quo: young managers should find more innovative solutions in the business market (p. 19). Finally, there are the personal characteristics found in extraordinary leaders. He highlights flexibility, enthusiasm, integrity, willingness to experiment, and the ability to inspire others, to build relationships, to inspire trust, to communicate and to delegate; these are the qualities most routinely emphasised in outstanding leaders (p. 19). He concludes that intellect is missing from these five dimensions not because it is nonessential, but because the role of leadership is so wide that high intellect is not always necessary (p. 19). To a large extent, I agree with Neville on the five main dimensions of leadership. I believe communication is the most important part of leadership; it permeates every aspect of the business. For all managers, especially those leading teams, communicating with the workforce is a demanding and rewarding task.

Interpersonal talent and skills

A successful businessman must possess good communication skills. He should provide different channels for staff to express opinions, which helps them understand the company's objectives. Effective communication with employees ensures a thorough understanding of the leader's decisions and expectations of their work. Maintaining a comfortable working environment encourages staff to be more willing to talk to senior levels (p. 219). A reduced status difference (p. 219) and enhanced staff morale help in collecting employees' feedback and suggestions, which contributes to company improvements and also helps ensure all staff are working towards the same goals. I believe a successful businessman should be a good leader and team player. It is very important that a person can share his knowledge and experience with his employees. He should ensure his messages and decisions are effectively conveyed to them. To build a good relationship with staff, he should respect them and encourage them to participate in production and offer new ideas at work. Appreciation of work participation creates good staff morale and motivates them to work towards the same organisational goals.
A person will not be successful if he cannot work well with his subordinates and employees. Even if he has many marvellous ideas but no one is willing to work with him, he will only be a talented individual instead of a successful leader in the company.

Risk management skills

Living with, and challenged by, risks, Nick Jackson believes that risk is now firmly on the business management agenda. People who learn robust risk management hold the key to succeeding in business. Whatever risks leaders face, the major challenge is how to identify, tackle and monitor the risks, and to prepare understandable, maintainable and applicable contingency plans. Not many in industry today achieve this, he thinks: they can recognise the risks but fail to observe their dormant influence, or lose sight of the controlling systems. Fewer than 30% of organisations manage risk well, and even then not always effectively (p. 38). Moreover, Nick states that 'risk' can carry positive as well as negative meanings. He holds that people should not only keep improving the way they manage difficulties, but also guard against the negative impact of self-satisfaction when they hold the advantage (p. 42). Nick Jackson has reminded me that at every single moment we cannot slack off from supervising risk in any form. It requires all-round critical thinking. We cannot stop monitoring the business once one risk has been tackled. This is a hard and tough part of business to handle, and a great challenge to every businessman and administrator.

Conclusion

"The productivity of work is not the responsibility of the worker but of the manager." - Peter F. Drucker, expert management consultant. All in all, the five abilities stated in the passages above are inseparable. These five categories show how a manager can strengthen productivity and competitiveness in the industry. The topic can be summed up in one sentence: "Manage yourself well, then lead the team to work smartly with communication and a risk management plan." Within this project, we recognised that these skills are applicable not only in business-related industries but also under different criteria in different industries. In the 21st century, people deserve a better quality of service and bosses deserve higher efficiency of work from their staff. As a result, this paper will be valuable to the majority of the world.

References

Bain, Neville. "Management or leadership?" Successful Management. 1st ed. London: Macmillan Press Ltd. 14-19. Print.
Cheng, C. N. "Communication in Business." Introduction to Business Studies. Revised 1st ed. Hong Kong: Hong Kong Educational Publishing Co, 2009. 212-231. Print.
Jackson, Nick. "Risk is on the corporate agenda, but where does it fit?" Managing Business Risk. 3rd ed. Great Britain: Kogan Page Ltd, 2006. Print.
Kinicki, Angelo, and Brian K. Williams. "The nature of leadership." Management: A Practical Introduction. 4th ed. New York: McGraw-Hill/Irwin, 2009. 436-438. Print.
Madsen, Susanne. "My Story: Work Smarter Not Harder." Projectsmart.co.uk. Project Smart, 10 September 2011. Web. 10 April 2012.
Samson, Danny, and Richard L. Daft. "Leadership in organisation." Fundamentals of Management. 2nd ed. South Melbourne, Victoria, Australia: Thomson Learning Australia, 2005. 427-431. Print.
Vieira, Walter. Manager to CEO: Corporate Wisdom for Survival and Success. New Delhi/Thousand Oaks/London: Tejeshwar Singh for Response Books - Sage Publications, 2005. Print.

Thursday, January 9, 2020

The functions of an Insurance Firm

An insurance firm makes insurance products and attains profitability by charging premiums that exceed the overall expenses of the firm and by making wise investment decisions that maximise returns under varied risk conditions. The method of charging premiums depends on various underlying factors, such as the number of policyholders, the number of claims, the amount of claims, and the health condition, age and gender of the policyholder. Some of these factors, such as aggregate loss claims and human mortality rates, have an adverse impact on determining the premium needed to remain solvent. These factors therefore need to be modelled using large amounts of data, many simulations and complex algorithms to determine and manage risk. In this dissertation, we consider two important factors affecting premiums: aggregate loss claims and human mortality. We use theoretical simulations in R, and use Danish loss insurance data to model aggregate claims. The Human Mortality Database (HMD)1 is used, and smoothed human mortality rates are computed to price life insurance products.

In chapter 2, we examine the concepts of compound distributions in modelling aggregate claims and perform simulations of the compound distribution using R packages such as MASS and actuar. Finally, we analyse Danish loss insurance data from 1980 to 1990 and fit appropriate distributions using customised, generically implemented R methods. In chapter 3, we explain briefly the concepts of graduation, generalised linear models and smoothing techniques using P-splines. We obtain deaths and exposure data from the Human Mortality Database for the selected countries Sweden and Scotland and implement mortality rate smoothing using the MortalitySmooth package. We compare mortality rates between sets such as males and females for a specific country, or total mortality rates across countries such as Sweden and Scotland, for a given time frame ranging age-wise or year-wise. In chapter 4, we look into various life insurance and pension products widely used in the insurance industry and construct life tables and commutation functions to implement annuity values. Finally, we provide the concluding comments of this dissertation in chapter 5.

Chapter 2 Aggregate Claim Distribution

2.1 Background

Insurance companies implement numerous techniques to evaluate the underlying risk of their assets, products and liabilities on a day-to-day basis for many purposes. These include:

Computation of premiums
Initial reserving to cover the cost of future liabilities
Maintaining solvency
Reinsurance agreements to protect from large claims

In general, the occurrence of claims is highly uncertain and has an underlying impact on each of the above. Thus modelling total claims is of high importance in ascertaining risk. In this chapter, we define claim distributions and aggregate claim distributions and discuss some probability distributions for fitting the model. We also perform simulations and goodness-of-fit tests on data, and conclude the chapter by fitting an aggregate claim distribution to Danish fire loss insurance data.
2.2 Modelling Aggregate Claims

The dynamics of the insurance industry have different effects on the number of claims and the amount of claims. For instance, expanding insurance business would proportionally increase the number of claims but have negligible or no impact on claim amounts. Conversely, cost-control initiatives and technology innovations have an adverse effect on claim amounts but no effect on claim numbers. Consequently, the aggregate claim is modelled on the assumption that the number of claims occurring and the amounts of claims are modelled independently.

2.2.1 Compound distribution model

We define the compound distribution as follows:

S - random variable denoting the total claims occurring in a fixed period of time.
X_i - random variable denoting the amount of the i-th claim.
N - non-negative, independent random variable denoting the number of claims occurring in the period.

Further, X_1, X_2, ... is a sequence of i.i.d. random variables with probability density function f(x) and cumulative distribution function F(x), with Pr(X_i > 0) = 1 for 1 <= i <= N. Then we obtain the aggregate claim2 S as

S = X_1 + X_2 + ... + X_N,

with the expectation and variance of S found as

E[S] = E[N] E[X_1],
Var(S) = E[N] Var(X_1) + Var(N) (E[X_1])^2.

Thus S, the aggregate claim, is computed using the collective risk model3 and follows a compound distribution.

2.3 Compound Distributions for Aggregate Claims

As discussed in Section 2.2, S follows a compound distribution, with the number of claims (N) as the primary distribution and the amount of claims (X) as the secondary distribution. In this section we describe the three main compound distributions widely used to model aggregate claims. The primary distribution can be modelled by non-negative integer-valued distributions such as the Poisson, binomial and negative binomial; the selection of a distribution depends on the case at hand.

2.3.1 Compound Poisson distribution

The Poisson distribution describes the occurrence of rare events; the number of accidents per person, the number of claims per insurance policy and the number of defects found in product manufacturing are some real examples of Poisson distributions. Here the primary distribution N has a Poisson distribution with parameter λ, denoted N ~ P(λ). The probability function, expectation and variance are

Pr(N = x) = e^(-λ) λ^x / x!,  x = 0, 1, 2, ...,
E[N] = λ,  Var(N) = λ.

Then S has a compound Poisson distribution with parameters λ and F, denoted S ~ CP(λ, F), with

E[S] = λ E[X_1],  Var(S) = λ E[X_1^2].
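As a quick check of these moment formulas, a minimal base-R sketch (independent of the generic Appendix 1 code used later in this chapter) simulates draws of S ~ CP(10, F) with Gamma(1, 1) claim amounts and compares the sample moments with λE[X_1] = 10 and λE[X_1^2] = 20:

set.seed(1)
rS <- function() {
  n <- rpois(1, lambda = 10)           # primary: claim count N ~ Poisson(10)
  sum(rgamma(n, shape = 1, rate = 1))  # secondary: N i.i.d. Gamma(1,1) claim amounts
}
s <- replicate(10000, rS())
mean(s); var(s)  # close to E[S] = 10 and Var(S) = 20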
2.3.2 Compound binomial distribution

The binomial distribution describes the number of successes occurring in a fixed number of trials; the number of males in a company and the number of defective components in a random sample from a production process are real examples of this distribution. The compound binomial distribution is a natural choice for modelling aggregate claims when there is an upper limit on the number of claims in a given time period. Here the primary distribution N has a binomial distribution with parameters n and p, denoted N ~ B(n, p). The probability function, expectation and variance are

Pr(N = x) = C(n, x) p^x (1 - p)^(n - x),  x = 0, 1, 2, ..., n,
E[N] = np,  Var(N) = np(1 - p).

Then S has a compound binomial distribution with parameters n, p and F, denoted S ~ CB(n, p, F), with

E[S] = np E[X_1],
Var(S) = np Var(X_1) + np(1 - p) (E[X_1])^2.

2.3.3 Compound negative binomial distribution

The compound negative binomial distribution also models aggregate claims. The variance of the negative binomial is greater than its mean, so we can prefer the negative binomial over the Poisson when the data have greater variance than mean; the distribution then provides a better fit to the data. Here the primary distribution N has a negative binomial distribution with parameters k and p, denoted N ~ NB(k, p), with k > 0 and 0 < p < 1. The probability function, expectation and variance are

Pr(N = x) = C(k + x - 1, x) p^k (1 - p)^x,  x = 0, 1, 2, ...,
E[N] = k(1 - p)/p,  Var(N) = k(1 - p)/p^2.

Then S has a compound negative binomial distribution with parameters k, p and F, denoted S ~ CNB(k, p, F).

2.4 Secondary Distributions - Claim Amount Distributions

In the previous Section 2.3 we defined the three compound distributions widely used for claim numbers. In this section we define the distributions generally used to model claim amounts, all of them positively skewed. These include the Weibull distribution, used frequently in engineering applications, and distributions such as the Pareto and lognormal, which are widely used to study loss distributions.

2.4.1 Pareto distribution

The distribution is named after Vilfredo Pareto4, who used it to model economic welfare; it is used these days to model income distribution in economics. The random variable X has a Pareto distribution with parameters α and λ, where α, λ > 0, denoted X ~ Pareto(α, λ). The probability density function, expectation and variance are

f(x) = α λ^α / (λ + x)^(α + 1),  x > 0,
E[X] = λ / (α - 1)  for α > 1,
Var(X) = α λ^2 / ((α - 1)^2 (α - 2))  for α > 2.

2.4.2 Lognormal distribution

The random variable X has a lognormal distribution with parameters μ and σ, where σ > 0, denoted X ~ LN(μ, σ^2), where μ and σ^2 are the mean and variance of log(X). The lognormal distribution has a positive skew and is a very good distribution for modelling claim amounts. The probability density function, expectation and variance are

f(x) = exp(-(log x - μ)^2 / (2σ^2)) / (x σ √(2π)),  x > 0,
E[X] = exp(μ + σ^2/2),
Var(X) = exp(2μ + σ^2) (exp(σ^2) - 1).

2.4.3 Gamma distribution

The gamma distribution is very useful for modelling claim amounts. The distribution has a shape parameter α and a rate parameter λ. The random variable X has a gamma distribution with parameters α and λ, where α, λ > 0, denoted X ~ Gamma(α, λ). The probability density function, expectation and variance are

f(x) = λ^α x^(α - 1) e^(-λx) / Γ(α),  x > 0,
E[X] = α/λ,  Var(X) = α/λ^2.

2.4.4 Weibull distribution

The Weibull distribution is an extreme-value distribution; because of its survival function it is used widely in modelling lifetimes. The random variable X has a Weibull distribution with shape parameter γ and scale parameter λ, where γ, λ > 0, denoted X ~ W(γ, λ). The probability density function, expectation and variance are

f(x) = (γ/λ) (x/λ)^(γ - 1) exp(-(x/λ)^γ),  x > 0,
E[X] = λ Γ(1 + 1/γ),
Var(X) = λ^2 (Γ(1 + 2/γ) - Γ(1 + 1/γ)^2).
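Before moving to simulation, it is worth seeing how differently these candidate distributions behave in the tail. The short sketch below is illustrative only: the parameters are arbitrary choices giving each distribution a mean of roughly 2, and rpareto comes from the actuar package already used in this chapter:

set.seed(2)
q <- c(0.95, 0.99, 0.999)
quantile(rgamma(1e5, shape = 2, rate = 1), q)            # light tail
quantile(rlnorm(1e5, meanlog = 0.5, sdlog = 0.7), q)     # moderate tail
quantile(actuar::rpareto(1e5, shape = 3, scale = 4), q)  # heavy tail

The Pareto sample produces far larger extreme quantiles, which is why it is a popular model for large fire losses.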
2.5 Simulation of Aggregate Claims using R

In Section 2.3 we discussed aggregate claims and the various compound distributions used to model them. In this section we perform random simulation using R.

2.5.1 Simulation using R

The simulation of aggregate claims was implemented using packages such as actuar and MASS5. The generic R code available in Programs/Aggregate_Claims_Methods.r, given in Appendix 1, implements simulation of randomly generated aggregate claim samples for any compound distribution. The following R code generates simulated aggregate claim data for a compound Poisson distribution with gamma as the claim distribution, denoted CP(10, F):

require(actuar)
require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Sim.Sample = SimulateAggregateClaims(ClaimNo.Dist = "pois",
    ClaimNo.Param = list(lambda = 10),
    ClaimAmount.Dist = "gamma",
    ClaimAmount.Param = list(shape = 1, rate = 1),
    No.Samples = 2000)
names(Sim.Sample)

The SimulateAggregateClaims method in Programs/Aggregate_Claims_Methods.r generates and returns simulated aggregate samples along with the expected and observed moments. The simulated data can then be used to perform various tests, comparisons and plots.

2.5.2 Comparison of moments

The expected and observed moments are compared to test the correctness of the data. The following R code returns the expected and observed mean and variance of the simulated data respectively:

Sim.Sample$Exp.Mean; Sim.Sample$Exp.Variance
Sim.Sample$Obs.Mean; Sim.Sample$Obs.Variance

Table 2.1 below shows the simulated values for different sample sizes. Clearly the observed and expected moments are similar, and the difference between them shrinks as the number of samples increases.

Sample size          100        1000      10000     100000
Observed mean        10.431     09.953    10.008    09.986
Expected mean        10         10        10        10
Observed variance    20.72481   19.692    20.275    19.810
Expected variance    20         20        20        20

Table 2.1 Comparison of observed and expected moments for different sample sizes.

2.5.3 Histogram with fitted distribution curves

Histograms can provide useful information on skewness and on extreme points and outliers in the data, and can be graphically compared with the shapes of standard distributions. Figure 2.1 below shows the histogram of the simulated data compared with the standard Weibull, normal, lognormal and gamma distributions. The function PlotAggregateClaimsData(Agg.Claims) plots the histogram along with the fitted standard distributions; the histogram is plotted with 50 breaks. The simulated data are fitted using the fitdistr() function in the MASS package for the normal, lognormal, gamma and Weibull distributions. The following R code shows how fitdistr() is used to compute the gamma parameters and plot the corresponding curve, as shown in Figure 2.1:

gamma = fitdistr(Agg.Claims, "gamma")
Shape = gamma$estimate[1]
Rate = gamma$estimate[2]
Left = min(Agg.Claims)
Right = max(Agg.Claims)
Seq = seq(Left, Right, by = 0.01)
lines(Seq, dgamma(Seq, shape = Shape, rate = Rate), col = "blue")

Figure 2.1 Histogram of simulated aggregate claims with fitted standard distribution curves.

2.5.4 Goodness of fit

A goodness-of-fit test compares the closeness of expected and observed values, to conclude whether it is reasonable to accept that the random sample fits a standard distribution or not. It is a type of hypothesis test, with the hypotheses defined as follows:

H0: the data fit the standard distribution
H1: the data do not fit the standard distribution

The chi-square test is one way to test goodness of fit6. The test uses the histogram and compares it with the fitted density. The data are grouped into intervals using k breaks, computed using quantiles; the expected frequency E_i in each cell is the difference of the fitted c.d.f. at the cell boundaries multiplied by the sample size, and the observed frequency O_i is the histogram count. The test statistic is defined as

χ^2 = Σ_{i=1}^{k} (O_i - E_i)^2 / E_i,

where O_i is the observed frequency and E_i the expected frequency for the k breaks respectively.
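A self-contained sketch of this test (a hedged illustration, not the PerformChiSquareTest implementation from Appendix 1 used below) bins a gamma sample with quantile breaks and computes the statistic and p-value:

set.seed(3)
x <- rgamma(2000, shape = 2, rate = 0.5)   # stand-in for simulated aggregate claims
fit <- MASS::fitdistr(x, "gamma")
k <- 20                                    # number of cells
breaks <- quantile(x, probs = seq(0, 1, length = k + 1))
O <- as.vector(table(cut(x, breaks, include.lowest = TRUE)))  # observed counts
E <- length(x) * diff(pgamma(breaks, fit$estimate["shape"], fit$estimate["rate"]))  # expected counts
X2 <- sum((O - E)^2 / E)
1 - pchisq(X2, df = k - 2 - 1)             # p-value, with p = 2 fitted parameters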
To perform the test we use 100 breaks, splitting the data into 100 equal cells, and use the histogram counts to group the data by observed value. Large values of χ^2 lead to rejecting the null hypothesis H0. The test statistic follows a χ^2 distribution with k - p - 1 degrees of freedom, where p is the number of parameters of the fitted distribution. The p-value is computed using 1 - pchisq(), and H0 is accepted if the p-value is greater than the significance level. The following R code computes the chi-square test:

Test.ChiSq = PerformChiSquareTest(Samples.Claims = Sim.Sample$AggregateClaims, No.Samples = N.Samples)
Test.ChiSq$DistName
Test.ChiSq$X2Val; Test.ChiSq$pvalue
Test.ChiSq$Est1; Test.ChiSq$Est2

                 Gamma     Normal     Lognormal   Weibull
Test statistic   125.466   160.2884   439         91
p-value          5.609*    0

Table 2.2 Chi-square statistics and p-values for the compound Poisson distribution.

The highest p-value signifies the best fit of the data to a standard distribution. In the above simulation, Table 2.2 shows that the Weibull distribution provides the best fit, with parameters shape = 2.348 and scale = 11.32; an eyeball of the histogram confirms the same.

2.6 Fitting Danish Data

2.6.1 The Danish data source of information

In this section we use a statistical model and fit a compound distribution to compute aggregate claims using historical data. Fitting data to a probability distribution using R is an interesting exercise, and it is worth quoting "All models are wrong, some models are useful" - George E. P. Box and Norman R. Draper (1987). In the previous section we explained fitting distributions, comparison of moments and goodness of fit on simulated data. The data source used here is the Danish data7 composed from Copenhagen Reinsurance, containing over 2000 fire loss claims recorded from 1980 to 1990. The data are adjusted for inflation to replicate 1985 values and are expressed in millions of Danish kroner (DKK). There are 2167 rows of data over 11 years. Grouping the data by year would give only 11 aggregate samples, insufficient information to fit and plot the distribution; the data are therefore grouped by month, giving 132 samples. Figure 2.2 plots the monthly aggregate claims from 1980 to 1990, showing the extreme loss values and their times of occurrence. There are no seasonal effects in the data: a two-sample t-test comparing summer and winter data shows no difference, so we conclude there is no seasonal variation.

Figure 2.2 Time series plot of Danish fire loss insurance data, month-wise, 1980-1990.
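The seasonality check mentioned above can be reproduced with a standard two-sample t-test. A minimal sketch follows, using the Danish.Data object built in Section 2.6.3 below and assuming the 132 monthly totals run January to December within each year; the April-September definition of summer is our assumption, as the essay does not state the split it used:

Monthly.Claims <- Danish.Data$Agg.ClaimData   # 132 monthly aggregate claims
Month <- rep(1:12, times = 11)                # assumed month index, 1980-1990
summer <- Monthly.Claims[Month %in% 4:9]
winter <- Monthly.Claims[!(Month %in% 4:9)]
t.test(summer, winter)   # a large p-value indicates no seasonal difference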
The expectation and variance of the monthly aggregate claims are 55.572 and 1440.7 respectively, and the expectation and variance of the monthly claim numbers are 16.41667 and 28.2. As discussed previously in Section 2.3.3, the negative binomial distribution is a natural choice for modelling the claim numbers, since the variance is greater than the mean. The data are plotted and fitted to a histogram using the fitdistr() function in the MASS package of R.

2.6.2 Analysis of Danish data

We take the following steps to analyse and fit the Danish loss insurance data:

Obtain the claim numbers and aggregate loss claims month-wise.
As discussed in Section 2.6.1, choose the negative binomial as the primary distribution and use the fitdistr() function to obtain its parameters.
Conduct a chi-square goodness-of-fit test for the claims distribution on the aggregate claims and obtain the necessary parameters.
Simulate 1000 samples as in Section 2.5.1, and plot the histogram along with the fitted standard distributions as described in Section 2.5.3.
Perform a chi-square test to identify the optimal fit and obtain the distribution parameters.

2.6.3 R program implementation

The following R code reads the Danish data available in Data/DanishData.txt, segregates the claims month-wise, calculates the sample means and variances, and plots the histogram with fitted standard distributions:

require(MASS)
source("Programs/Aggregate_Claims_Methods.r")
Danish.Data = ComputeAggClaimsFromData("Data/DanishData.txt")
Danish.Data$Agg.ClaimData = round(Danish.Data$Agg.ClaimData, digits = 0)
mean(Danish.Data$Agg.ClaimData)
var(Danish.Data$Agg.ClaimData)
Danish.Data$Agg.ClaimData
mean(Danish.Data$Agg.ClaimNos)
var(Danish.Data$Agg.ClaimNos)

Figure 2.3 Actual Danish fire loss data fitted with standard distributions, 132 samples.

In the initial case N has a negative binomial distribution with parameters k = 25.32 and p = .6067.

                 Gamma     Normal    Lognormal   Weibull
Test statistic   95.273    142.243   99.818      118
p-value          .53061    .0019     .40199      .072427

Table 2.3 Chi-square statistics and p-values for the Danish fire loss insurance data.

Based on the chi-square goodness-of-fit test shown in Table 2.3, we take the secondary distribution to be a gamma distribution with parameters shape = 3.6559 and scale = 15.21363. We simulate 1000 aggregate claim samples as in Section 2.5.1; the plot and chi-square test values are given below. The generic function PerformChiSquareTest(), discussed previously in Section 2.5.4, is used to compute the χ^2 values and p-values.

Figure 2.4 Histogram of simulated samples of the Danish data fitted with standard distributions.

Figure 2.4 above shows the simulated samples of the Danish data for sample size 1000, together with the different distribution curves fitted to the simulated data. The chi-square values are tabulated in Table 2.4 below.

                 Normal    Gamma    Lognormal   Weibull
Test statistic   123.32    84.595   125.75      115.50
p-value          .036844   .8115    .02641      .09699

Table 2.4 Chi-square statistics and p-values for the compound negative binomial fit to the Danish insurance loss data.

The results described in Table 2.4 suggest that the optimal choice of model is the gamma distribution with parameters shape = 8.446 and rate = .00931.
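The fitted model of this section can also be sketched directly in base R; the following minimal illustration (not the generic Appendix 1 code) simulates the compound negative binomial model with the parameter estimates quoted above:

set.seed(4)
rS.danish <- function() {
  n <- rnbinom(1, size = 25.32, prob = 0.6067)      # monthly claim count, NB(k, p)
  sum(rgamma(n, shape = 3.6559, scale = 15.21363))  # sum of gamma claim severities
}
s <- replicate(1000, rS.danish())
mean(s); var(s)   # mean close to E[N]E[X] = 16.42 * 55.62, about 913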
Chapter 3 Survival Models - Graduation

In the previous chapter we discussed aggregate claims and how they can be modelled and simulated using R. In this chapter we discuss one of the important factors leading to the occurrence of a claim: human mortality. Life insurance companies use this factor to model the risk arising from claims. We analyse and investigate the crude data presented in the Human Mortality Database for the specific countries Scotland and Sweden and use statistical techniques to smooth the data. The MortalitySmooth package is used to smooth the data based on the Bayesian information criterion (BIC), a technique used to determine the smoothing parameter, and we plot the data. Finally, we conclude by comparing the mortality of the two countries over time.

3.1 Introduction

Mortality data, in simple terms, are recordings of the deaths of species in a defined set. The data can be grouped by different variables such as sex, age, year, geographical location and species. In this chapter we use human data grouped by country population, sex, age and year. Human mortality in developed nations has improved significantly over the past few centuries, attributable largely to improved standards of living and national health services; in later decades there has also been tremendous improvement in health care, which has strong demographic and actuarial implications. Here we use human mortality data to analyse mortality trends, compute life tables and price different annuity products.

3.2 Sources of Data

The Human Mortality Database (HMD)1 is used to extract the deaths and exposure data, which are collected from national statistical offices. In this dissertation we look at data for the two countries Sweden and Scotland for specific ages and years. The deaths and exposure data are downloaded from the HMD:

Sweden
Deaths: https://www.mortality.org/hmd/SWE/STATS/Deaths_1x1.txt
Exposures: https://www.mortality.org/hmd/SWE/STATS/Exposures_1x1.txt

Scotland
Deaths: https://www.mortality.org/hmd/GBR_SCO/STATS/Deaths_1x1.txt
Exposures: https://www.mortality.org/hmd/GBR_SCO/STATS/Exposures_1x1.txt

They are downloaded and saved as .txt files in the directory under /Data/Countryname_deaths.txt and /Data/Countryname_exposures.txt respectively. In general, data availability and formats vary over countries and time. The female and male death and exposure data are shared from the raw data; the total column in the data source is calculated as a weighted average based on the relative sizes of the two groups, male and female, at a given time.

3.3 P-spline Techniques in Smoothing Data

A well-known actuary, Benjamin Gompertz, observed that over a long period of human life the force of mortality increases geometrically with age. This was modelled for single years of life, and the Gompertz model is linear on the log scale. The Gompertz law8 states that the mortality rate increases in geometric progression, so the death rate at age x is

μ_x = A B^x,  with A > 0, B > 1,

and the linear model is fitted by taking logs on both sides:

log μ_x = a + bx,  where a = log A and b = log B.

The corresponding quadratic model is

log μ_x = a + bx + cx^2.

3.3.1 Generalized linear models and P-splines in smoothing data

Generalized linear models (GLMs) are an extension of linear models that allow models to be fitted to data following probability distributions such as the Poisson and binomial. If D_x is the number of deaths at age x and E_x is the central exposed to risk, then the maximum likelihood estimate of μ_x is D_x / E_x, and under a GLM, D_x follows a Poisson distribution, D_x ~ Poisson(E_x μ_x), with log μ_x = a + bx.
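As a sketch of this GLM in R (a hedged illustration: the deaths here are simulated from a Gompertz law rather than taken from the HMD files of Section 3.2), the log-linear model can be fitted with a Poisson family and the log exposure as offset:

age <- 30:90
exposure <- rep(1e5, length(age))                               # central exposed to risk
deaths <- rpois(length(age), exposure * exp(-10 + 0.09 * age))  # Gompertz-like counts
fit <- glm(deaths ~ age, family = poisson, offset = log(exposure))
coef(fit)   # estimates of a and b in log(mu_x) = a + b*x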
We use P-spline techniques9 in smoothing the data. As mentioned above, under the GLM the number of deaths follows a Poisson distribution, and we fit the regression using the exposure as the offset parameter. Splines are piecewise polynomials, usually cubic, joined using the property that their second derivatives are equal at the joins; these joins are called knots. The fit uses a B-spline regression matrix, and a penalty function of linear, quadratic or cubic order penalises irregular behaviour of the data by placing a difference penalty on the coefficients. This penalty function enters the log-likelihood along with a smoothing parameter λ, and the penalised likelihood is maximised to obtain the smoothed data. The larger the value of λ, the smoother the function but the greater the deviance, so the optimal value of λ is chosen to balance deviance against model complexity. λ is evaluated using techniques such as the Bayesian information criterion (BIC) and Akaike's information criterion (AIC).

The MortalitySmooth package in R implements the techniques mentioned above in smoothing data. There are different choices when smoothing data using P-splines: the number of knots ndx, the degree of the P-spline (linear, quadratic or cubic) bdeg, and the smoothing parameter lambda. The methods in the MortalitySmooth package fit a P-spline model with equally spaced B-splines along the x-axis. There are four possible methods in this package for choosing the smoothing parameter; BIC is the default used by MortalitySmooth. AIC minimisation is also available, but BIC provides better outcomes for large counts. In this dissertation we smooth the data using the default option, BIC, and using a fixed lambda value.

3.4 MortalitySmooth Package: R program implementation

In this section we describe the generic R implementation that reads deaths and exposure data from the Human Mortality Database and uses the MortalitySmooth10 package to smooth the data based on P-splines. The following code loads the required packages and methods and plots a smoothed fit:

require(MortalitySmooth)
source("Programs/Graduation_Methods.r")
Age <- 30:90; Year <- 1959:1999
country <- "scotland"; Sex <- "Males"
death = LoadHMDData(country, Age, Year, "Deaths", Sex)
exposure = LoadHMDData(country, Age, Year, "Exposures", Sex)
FilParam.Val <- 40
Hmd.SmoothData = SmoothedHMDDataset(Age, Year, death, exposure)
XAxis <- Year
YAxis <- log(fitted(Hmd.SmoothData$Smoothfit.BIC)[Age == FilParam.Val, ] / exposure[Age == FilParam.Val, ])
PlotHMDDataset(XAxis, log(death[Age == FilParam.Val, ] / exposure[Age == FilParam.Val, ]), MainDesc, Xlab, Ylab, legend.loc)
DrawlineHMDDataset(XAxis, YAxis)

The MortalitySmooth package is loaded, and the generic implementation of the methods for graduation smoothing is available in Programs/Graduation_Methods.r. A step-by-step description of the code follows.

Step 1: Load Human Mortality Database data

Method name: LoadHMDData
Description: Returns a matrix object of dimension m x n, with m the number of ages and n the number of years, formatted specifically for use with the Mort2Dsmooth function.
Implementation: LoadHMDData(Country, Age, Year, Type, Sex)
Arguments:
Country - name of the country for which data are to be loaded. If the country is Denmark, Sweden, Switzerland or Japan, the selectHMDdata function of the MortalitySmooth package is called internally.
Age - vector giving the rows of the matrix object; there must be at least one value.
Year - vector giving the columns of the matrix object; there must be at least one value.
Type - the type of data to be loaded from the Human Mortality Database: "Deaths" or "Exposures".
Sex - an optional filter for loading the data: "Males", "Females" or "Total", with default "Total".
Details: The method LoadHMDData in Programs/Graduation_Methods.r reads the data available in the directory named Data and loads deaths or exposures for the given parameters. The data can be filtered by country, age, year, type ("Deaths" or "Exposures") and, lastly, sex.

Figure 3.1 Format of the matrix objects Death and Exposure for Scotland, with ages from 30 to 90 and years from 1959 to 1999.

Figure 3.1 shows the format used by the Death and Exposure objects: a matrix object with ages in the rows and years in the columns. The MortalitySmooth package itself functions only for the specific countries listed in the package: Denmark, Switzerland, Sweden and Japan. Data for these countries can be loaded directly using the selectHMDdata() function available in the MortalitySmooth R package. LoadHMDData checks the value of the variable country; if it equals any of the four, the selectHMDdata() function is used, otherwise the customised generic function is called to return the data objects. The format of the returned matrix objects is exactly the same in both cases.
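For the four countries bundled with the package, the same smoothing can be sketched with the package calls directly; this is a hedged illustration whose argument names follow the MortalitySmooth documentation, and the exact signatures should be checked against the installed version:

library(MortalitySmooth)
ages <- 30:90; years <- 1959:1999
D <- selectHMDdata("Sweden", "Deaths", "Males", ages, years)
E <- selectHMDdata("Sweden", "Exposures", "Males", ages, years)
fit <- Mort2Dsmooth(x = ages, y = years, Z = D, offset = log(E))  # BIC is the default criterion
log(fitted(fit)[ages == 40, ] / E[ages == 40, ])                  # smoothed log-mortality at age 40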
Step 2: Smooth the HMD dataset

Method name: SmoothedHMDDataset
Description: Returns a list of smoothed objects based on BIC and lambda, of matrix type with dimension m x n (m ages, n years), formatted for use with the Mort2Dsmooth() function and customised for mortality data only. The Smoothfit.BIC and Smoothfit.fitLAM objects are returned along with the fitted values fitBIC.Data.
Implementation: SmoothedHMDDataset(Xaxis, YAxis, ZAxis, Offset.Param)
Arguments:
Xaxis - vector for the abscissa of the data, passed to Mort2Dsmooth; here the age vector.
Yaxis - vector for the ordinate of the data, passed to Mort2Dsmooth; here the year vector.
ZAxis - matrix of count responses, passed to Mort2Dsmooth; here the Death matrix object, whose dimensions must correspond to the lengths of XAxis and YAxis.
Offset.Param - a matrix of prior known values to be included in the linear predictor when fitting the 2D data.
Details: The method SmoothedHMDDataset in Programs/Graduation_Methods.r smooths the data based on the death and exposure objects loaded in step 1. Age, year and death are passed as the x-axis, y-axis and z-axis respectively, with exposure as the offset parameter; these parameters are fitted internally by the Mort2Dsmooth function available in the MortalitySmooth package.

Step 3: Plot the smoothed data based on user input

Method name: PlotHMDDataset
Description: Plots the smoothed object with user-supplied information such as axes, legend, axis scale and main description.
Implementation: PlotHMDDataset(Xaxis, YAxis, MainDesc, Xlab, Ylab, legend.loc, legend.Val, Plot.Type, Ylim)
Arguments:
Xaxis - vector of x-axis values; here age or year, based on the user request.
Yaxis - vector of y-axis values; here the smoothed log-mortality values filtered for a particular age or year.
MainDesc - main plot caption describing the plot.
Xlab - x-axis label.
Ylab - y-axis label.
legend.loc - customised location of the legend; it can take the values "topright" and "topleft".
legend.Val - customised legend descriptions; a vector of strings.
Plot.Type - an optional value to change the plot type; the default is the plot's default, and if the value equals 1 a line figure is plotted.
Ylim - an optional value to set the height of the y-axis; by default the maximum of the Y values.
Details: The generic method PlotHMDDataset in Programs/Graduation_Methods.r plots the smoothed fitted mortality values with options to customise based on user inputs. The generic method DrawlineHMDDataset in Programs/Graduation_Methods.r plots the line, and is usually called after the PlotHMDDataset method.
3.5 Graphical Representation of Smoothed Mortality Data

In this section we look at graphical representations of the mortality data for the selected countries, Scotland and Sweden. The generic program discussed in the previous Section 3.4 is used to produce the plots from user inputs.

Log mortality of smoothed data vs. actual fit for Sweden

Figure 3.3 Left panel: plot of year vs. log(mortality) for Sweden, based on age 40 and years from 1945 to 2005. The points represent real data, and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively. Right panel: plot of age vs. log(mortality) for Sweden, based on year 1995 and ages from 30 to 90. The points represent real data, and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.

Figure 3.3 shows the smoothed mortality against the actual data for Sweden, by age and by year. The actual data are displayed as points, and the red and blue curves are the smoothed fits; the MortalitySmooth package smooths the data in these two ways, with the default BIC criterion and with lambda = 10000.

Log mortality of smoothed data vs. actual fit for Scotland

Figure 3.4 Left panel: plot of year vs. log(mortality) for Scotland, based on age 40 and years from 1945 to 2005. The points represent real data, and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively. Right panel: plot of age vs. log(mortality) for Scotland, based on year 1995 and ages from 30 to 90. The points represent real data, and the red and blue curves represent smoothed fitted curves for BIC and lambda = 10000 respectively.

Figure 3.4 shows the smoothed mortality against the actual data for Scotland, by age and by year, in the same format.

Log mortality of females vs. males for Sweden

Figure 3.5 below shows the mortality rates for males and females in Sweden, by age and by year. The left panel reveals that male mortality has been higher than female mortality over the years, with a sudden increase in male mortality from the mid-1960s until the late 1970s. Life expectancy for Swedish males in 1960 was 71.24 years against 74.92 for women, and over the next decade it increased to 77.06 for women but only 72.2 for men, which explains the trend11. The right panel shows that male mortality is higher than female mortality for the year 1995; the male-to-female sex ratio is 1.06 at birth and decreases consistently to 1.03 at ages 15-64 and .79 at 65 and above, clearly explaining why the increase in Swedish male mortality12 exceeds that of females.

Figure 3.5 Left panel: plot of year vs. log(mortality) for Sweden, based on age 40 and years from 1945 to 2005. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed BIC fitted curves for males and females respectively. Right panel: plot of age vs. log(mortality) for Sweden, based on year 2000 and ages from 25 to 90. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed BIC fitted curves for males and females respectively.
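A comparison like Figure 3.5 can be produced with the same generic methods by loading each sex separately; the sketch below uses the wrapper functions documented in Section 3.4, and the caption and legend strings are illustrative:

Age <- 25:90; Year <- 1945:2005
d.m <- LoadHMDData("sweden", Age, Year, "Deaths", "Males")
e.m <- LoadHMDData("sweden", Age, Year, "Exposures", "Males")
d.f <- LoadHMDData("sweden", Age, Year, "Deaths", "Females")
e.f <- LoadHMDData("sweden", Age, Year, "Exposures", "Females")
fit.m <- SmoothedHMDDataset(Age, Year, d.m, e.m)
fit.f <- SmoothedHMDDataset(Age, Year, d.f, e.f)
# overlay smoothed log-mortality at age 40 for the two sexes
YAxis.m <- log(fitted(fit.m$Smoothfit.BIC)[Age == 40, ] / e.m[Age == 40, ])
YAxis.f <- log(fitted(fit.f$Smoothfit.BIC)[Age == 40, ] / e.f[Age == 40, ])
PlotHMDDataset(Year, YAxis.m, "Sweden, age 40", "Year", "log(mortality)", "topright")
DrawlineHMDDataset(Year, YAxis.f)   # add the female curve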
Log mortality of females vs. males for Scotland

The left panel of Figure 3.6 shows a consistent dip in mortality rates, but male mortality at age 40 has risen steadily relative to female mortality over a long period starting in the mid-1950s. The right panel shows that male mortality is higher than female mortality for the year 1995; the male-to-female sex ratio is 1.04 at birth and decreases consistently to .94 at ages 15-64 and .88 at 65 and above, clearly explaining why the increase in Scottish male mortality13 exceeds that of females.

Figure 3.6 Left panel: plot of year vs. log(mortality) for Scotland, based on age 40 and years from 1945 to 2005. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed BIC fitted curves for males and females respectively. Right panel: plot of age vs. log(mortality) for Scotland, based on year 2000 and ages from 25 to 90. The red and blue points represent real data for males and females respectively, and the red and blue curves represent smoothed BIC fitted curves for males and females respectively.

Log mortality of Scotland vs. Sweden

The left panel of Figure 3.7 shows that mortality rates for Scotland are higher than for Sweden, and that Swedish mortality has decreased consistently since the mid-1970s, whereas Scottish mortality, though it decreased for a period, then started to trend upward; this could be attributed to changes in living conditions.

Figure 3.7 Left panel: plot of year vs. log(mortality) for Sweden and Scotland, based on age 40 and years from 1945 to 2005. The red and blue points represent real data for Sweden and Scotland respectively, and the red and blue curves represent smoothed BIC fitted curves for Sweden and Scotland respectively. Right panel: plot of age vs. log(mortality) for Sweden and Scotland, based on year 2000 and ages from 25 to 90. The red and blue points represent real data for Sweden and Scotland respectively, and the red and blue curves represent smoothed BIC fitted curves for Sweden and Scotland respectively.

Wednesday, January 1, 2020

The modern assembler

Q1. Design specification of an assembler with diagram?

Ans. Typically a modern assembler creates object code by translating assembly instruction mnemonics into opcodes, and by resolving symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution, e.g., to generate common short sequences of instructions to run inline, instead of in a subroutine. Assemblers are generally simpler to write than compilers for high-level languages, and have been available since the 1950s. Modern assemblers, especially for RISC-based architectures such as MIPS, Sun SPARC and HP PA-RISC, as well as x86(-64), optimize instruction scheduling to exploit the CPU pipeline efficiently.

There are two types of assemblers, based on how many passes through the source are needed to produce the executable program. One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them. Two-pass assemblers (and multi-pass assemblers) create a table with all unresolved symbols in the first pass, then use the second pass to resolve these addresses. The advantage of one-pass assemblers is speed, which is not as important as it once was given advances in computer speed and capabilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source; as a result, the program can be written in a more logical and meaningful way, making two-pass assembler programs easier to read and maintain.
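A forward reference shows why the second pass is needed. In the generic sketch below (illustrative syntax, not tied to any particular assembler), the label done is undefined when pass 1 reaches the JE, so the assembler records it in the symbol table and patches the address on pass 2:

        CMP AL, 0     ; compare AL with zero
        JE  done      ; forward reference: done is unknown on pass 1
        DEC AL        ; otherwise decrement AL
done:   HLT           ; pass 1 defines done here; pass 2 fills in the JE address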
More sophisticated high-level assemblers provide language abstractions such as:

Advanced control structures
High-level procedure/function declarations and invocations
High-level abstract data types, including structures/records, unions, classes, and sets
Sophisticated macro processing
Object-oriented features such as encapsulation, polymorphism, inheritance, and interfaces

Note that, in normal professional usage, the term assembler is often used ambiguously: it frequently refers to the assembly language itself rather than to the assembler utility. Thus: "CP/CMS was written in S/360 assembler" as opposed to "ASM-H was a widely-used S/370 assembler."

Assembly language

A program written in assembly language consists of a series of instructions - mnemonics that correspond to a stream of executable instructions which, when translated by an assembler, can be loaded into memory and executed. For example, an x86/IA-32 processor can execute the following binary instruction as expressed in machine language:

Binary: 10110000 01100001 (Hexadecimal: B0 61)

The equivalent assembly language representation is easier to remember (example in Intel syntax, more mnemonic):

MOV AL, #61h

This instruction means: move the value 61h (97 decimal; the h suffix means hexadecimal, and the hash sign means move the immediate value, not the location) into the processor register named AL. The mnemonic mov represents the opcode 1011, which moves the value in the second operand into the register indicated by the first operand. The mnemonic was chosen by the instruction set designer to abbreviate "move", making it easier for the programmer to remember. A comma-separated list of arguments or parameters follows the opcode; this is a typical assembly language statement. In practice many programmers drop the word mnemonic and, technically incorrectly, call mov an opcode. When they do this they are referring to the underlying binary code it represents. To put it another way, a mnemonic such as mov is not an opcode, but as it symbolizes an opcode, one might refer to "the opcode mov", for example, when intending to refer to the binary opcode it symbolizes rather than to the mnemonic itself. As few modern programmers need to be mindful of the actual binary patterns behind the opcodes of specific instructions, the distinction has in practice become somewhat blurred among programmers, but not among processor designers.

Transforming assembly into machine language is accomplished by an assembler, and the reverse by a disassembler. Unlike high-level languages, there is usually a one-to-one correspondence between simple assembly statements and machine language instructions. However, in some cases an assembler may provide pseudo-instructions which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudo-instruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences.

Each computer architecture and processor architecture has its own machine language. On this level, each instruction is simple enough to be executed using a relatively small number of electronic circuits. Computers differ in the number and type of operations they support. For example, a new 64-bit machine has different circuitry from a 32-bit machine. They may also have different sizes and numbers of registers, and different representations of data types in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ, and the corresponding assembly languages reflect these differences. Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the manufacturer and used in its documentation.

Language design

Basic elements

Any assembly language consists of three types of instruction statements, which are used to define the program operations:

opcode mnemonics
data sections
assembly directives

Opcode mnemonics

Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, an opcode is a symbolic name for a single executable machine language instruction, and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be either immediate (typically one-byte values, coded in the instruction itself) or the addresses of data located elsewhere in storage.
This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works.

Data sections

There are instructions used to define data elements to hold data and variables. They define the type, length and alignment of the data. These instructions can also define whether the data are available to outside programs (programs assembled separately) or only to the program in which the data section is defined.

Assembly directives / pseudo-ops

Assembly directives are instructions that are executed by the assembler at assembly time, not by the CPU at run time. They can make the assembly of the program dependent on parameters input by the programmer, so that one program can be assembled in different ways, perhaps for different applications. They can also be used to manipulate the presentation of the program to make it easier for the programmer to read and maintain. (For example, pseudo-ops would be used to reserve storage areas and optionally set their initial contents.) The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Some assemblers also support pseudo-instructions, which generate two or more machine instructions.

Symbolic assemblers allow programmers to associate arbitrary names (labels or symbols) with memory locations. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are lexically distinct from normal symbols (e.g., the use of 10$ as a GOTO destination). Most assemblers provide flexible symbol management, allowing programmers to manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.

Assembly languages, like most other computer languages, allow comments to be added to assembly source code; these are ignored by the assembler. Good use of comments is even more important with assembly code than with higher-level languages, as the meaning and purpose of a sequence of instructions is harder to decipher from the code itself. Wise use of these facilities can greatly simplify the problems of coding and maintaining low-level code. Raw assembly source code as generated by compilers or disassemblers, i.e. code without any comments, meaningful symbols, or data definitions, is quite difficult to read when changes must be made.

Macros

Many assemblers support macros: programmer-defined symbols that stand for some sequence of text lines. This sequence of text lines may include a sequence of instructions or a sequence of data storage pseudo-ops. Once a macro has been defined using the appropriate pseudo-op, its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them just as though they had appeared in the source code file all along (including, in better assemblers, expansion of any macros appearing in the replacement text).
Since macros can have short names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear much shorter, requiring fewer lines of source code from the application programmer, as with a higher-level language. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters, and provide other similar features. Many assemblers have built-in macros for system calls and other special code sequences.

Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the expansion of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate a large number of assembly language instructions or data definitions based on the macro arguments. This could be used to generate record-style data structures or unrolled loops, for example, or could generate entire algorithms based on complex parameters. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language, since such programmers are not working with a computer's lowest-level conceptual elements.

Macros were used to customize large-scale software systems for specific customers in the mainframe era, and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's real-time transaction processing add-ons, CICS (Customer Information Control System) and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large Global Distribution Systems (GDS) and credit card systems today.

It was also possible to use solely the macro processing capabilities of an assembler to generate code written in completely different languages; for example, one could generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly-time operators instructing the assembler to generate arbitrary code. This was because, as was realized in the 1970s, the concept of macro processing is independent of the concept of assembly: the former is, in modern terms, closer to text processing than to generating object code. The concept of macro processing appeared, and still appears, in the C programming language, whose preprocessor supports instructions to set variables and make conditional tests on their values. Note that unlike certain earlier macro processors inside assemblers, the C preprocessor is not Turing-complete, because it lacks the ability to loop or to "go to" (the latter of which would allow the programmer to loop).

Despite the power of macro processing, it fell into disuse in high-level languages while remaining a perennial feature of assemblers. This was because many programmers were rather confused by macro parameter substitution and did not distinguish macro processing from assembly and execution. Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name.
The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:

foo:    macro a
        load a*b

the intention was that the caller would provide the name of a variable, and the global variable or constant b would be used to multiply a. If foo is called with the parameter a-c, the expansion load a-c*b occurs, which (since * binds more tightly than -) computes a-(c*b) rather than the intended (a-c)*b. To avoid this, users of macro processors learned to religiously parenthesize formal parameters inside macro definitions, and callers had to do the same to their actual parameters. PL/I and C feature macros, but this facility was underused, or dangerous when used, because macros can only manipulate text. On the other hand, homoiconic languages, such as Lisp, Prolog, and Forth, retain the power of assembly language macros because they are able to manipulate their own code as data.

Support for structured programming

Some assemblers have incorporated structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Dr. H. D. Mills (March 1970) and implemented by Marvin Kessler at IBM's Federal Systems Division, which extended the S/360 macro assembler with IF/ELSE/ENDIF and similar control flow blocks.[3] This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use).

A curious design was A-natural, a stream-oriented assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and of what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references, but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.

There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.

Q2. Does the assembler convert mnemonic code into a machine-understandable form?

Ans. A simple instruction in assembly language such as "add 4 and 5" may look like 00111101001 in machine language. How does the computer recognize which bits encode the add instruction and which encode the numbers? In practice, an instruction like "add 4 and 5" would translate into something like the following in assembly:

MVI R1, 4
MVI R2, 5
ADD R1, R2

It would then be translated into its sequence of opcodes, such as:

00111100 00111101 01111100

Now assume that the electronic circuits of your processor treat high bits as a high voltage of, say, 5 V and low bits as 0 V. As soon as the first instruction (00111100) is read into memory, the relevant voltages are generated and 4 gets stored in register R1. Similarly, 5 gets stored in register R2.
The last instruction selects the adder circuit and passes the contents of the two registers to it as input; the adder then outputs the sum, which gets stored in a register as well.

Q3. Machine language and assembly language are both low-level languages but are still not similar. Give examples. Write a program to display your name using any machine language.

Ans. In computer science, a low-level programming language is a language that provides little or no abstraction from a computer's instruction set architecture. The word "low" refers to the small or nonexistent amount of abstraction between the language and machine language; because of this, low-level languages are sometimes described as being "close to the hardware". A low-level language generally does not need a compiler or interpreter to run: machine code is executed directly by the processor for which it was written, and assembly requires only a straightforward translation by an assembler. By comparison, a high-level programming language isolates the execution semantics of a computer architecture from the specification of the program, making the process of developing a program simpler and more understandable.

Low-level programming languages are sometimes divided into two categories: first generation and second generation.

First generation

The first-generation programming language, or 1GL, is machine code. It is the only language a microprocessor can understand directly. Currently, programmers almost never write programs directly in machine code, because not only does it (like assembly language) require attention to numerous details which a high-level language would handle automatically, but it also requires memorizing or looking up numerical codes for every instruction that is used.

Example: a function in 32-bit x86 machine code to calculate the nth Fibonacci number:

8B542408 83FA0077 06B80000 0000C383
FA027706 B8010000 00C353BB 01000000
B9010000 008D0419 83FA0376 078BD98B
C84AEBF1 5BC3

Second generation

The second-generation programming language, or 2GL, is assembly language. It is considered a second-generation language because while it is not a microprocessor's native language, an assembly language programmer must still understand the microprocessor's unique architecture (such as its registers and instructions). These simple instructions are then assembled directly into machine code. The assembly code can also be abstracted to another layer, in a manner similar to the way machine code is abstracted into assembly code.

Example: the same Fibonacci number calculator as above, but in x86 assembly language using MASM syntax:

fib:
    mov edx, [esp+8]
    cmp edx, 0
    ja  @f
    mov eax, 0
    ret
@@:
    cmp edx, 2
    ja  @f
    mov eax, 1
    ret
@@:
    push ebx
    mov ebx, 1
    mov ecx, 1
@@:
    lea eax, [ebx+ecx]
    cmp edx, 3
    jbe @f
    mov ebx, ecx
    mov ecx, eax
    dec edx
    jmp @b
@@:
    pop ebx
    ret

Assembly languages are close to a one-to-one correspondence between symbolic instructions and executable machine codes. Assembly languages also include directives to the assembler, directives to the linker, directives for organizing data space, and macros. Macros can be used to combine several assembly language instructions into a high-level-language-like construct (as well as for other purposes). There are cases where a symbolic instruction is translated into more than one machine instruction, but in general, symbolic assembly language instructions correspond to individual executable machine instructions. High-level languages are abstract.
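To make the contrast concrete, here is a hedged sketch of how one short high-level assignment might compile down; the variable names are invented, and a real compiler's output would vary:

; the single high-level statement  x = (a + b) * c  might become:
    mov  eax, a        ; load a
    add  eax, b        ; a + b
    imul eax, c        ; (a + b) * c
    mov  x, eax        ; store into x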
Typically, a single high-level instruction is translated into several (sometimes dozens or, in rare cases, even hundreds of) executable machine language instructions. Some early high-level languages had a close correspondence between high-level instructions and machine language instructions. For example, most of the early COBOL instructions translated into a very obvious and small set of machine instructions. The trend over time has been for high-level languages to increase in abstraction. Modern object-oriented programming languages are highly abstract (although, interestingly, some key object-oriented programming constructs do translate into a very compact set of machine instructions).

Assembly language is much harder to program in than high-level languages. The programmer must pay attention to far more detail and must have an intimate knowledge of the processor in use. But high-quality hand-crafted assembly language programs can run much faster and use much less memory and other resources than a similar program written in a high-level language. Speed increases of 2 to 20 times are fairly common, and increases of hundreds of times are occasionally possible. Assembly language programming also gives direct access to key machine features essential for implementing certain kinds of low-level routines, such as an operating system kernel or microkernel, device drivers, and machine control.

High-level programming languages are much easier for less skilled programmers to work in and for semi-technical managers to supervise. And high-level languages allow faster development times than work in assembly language, even with highly skilled programmers; development speedups of 10 to 100 times are fairly common. Programs written in high-level languages (especially object-oriented programming languages) are much easier and less expensive to maintain than similar programs written in assembly language (and for a successful software project, the vast majority of the work and expense is in maintenance, not initial development).

Q4. Can an assembler perform search and sort operations? Give your comments.

Ans. Assemblers can be used to code search and sort operations. Below are examples.

Sorting:

Again:  MOV  FLAG, 0       ; FLAG <- 0
Next:   MOV  AL, [BX]
        CMP  AL, [BX+1]    ; Compare current and next values
        JLE  Skip          ; Branch if current <= next
        XCHG AL, [BX+1]    ; If not, swap the contents of the
        MOV  [BX+1], AL    ; current location with the next one
        MOV  FLAG, 1       ; Indicate the swap
Skip:   INC  BX            ; BX <- BX + 1
        LOOP Next          ; Go to next value
        CMP  FLAG, 1       ; Was there any swap?
        JE   Again         ; If yes, repeat the process
        RET

Searching:

        MOV  FLAG, 0       ; FLAG <- 0
Next:   CMP  AX, [BX + SI] ; Compare current value to VAL
        JE   Found         ; Branch if equal
        ADD  SI, 2         ; SI <- SI + 2, next value
        LOOP Next          ; Go to next value
        JMP  Not_Found
Found:  MOV  FLAG, 1       ; Indicate value found
        MOV  POSITION, SI  ; Return index of value in list
Not_Found:
        RET

Q5. Which assemblers can be developed?

Ans. The following information describes some of the changes that are specific to assembler programs. In the TPF 4.1 system, assembler programs were limited to 4 KB in size; in the z/TPF system, assembler programs can be larger than 4 KB. To exploit this capability, you can change your assembler programs to use:

o The CLINKC, RLINKC, and SLINKC assembler linkage macros
o Multiple base registers
o Baseless instructions

* You can use the CALLC general macro in assembler programs to call C language functions.
* In the TPF 4.1 system, the TMSPC and TMSEC macros were provided to set up the interface between C language programs and macro service routines written in assembler language. In the z/TPF system, the PRLGC and EPLGC macros set up this interface by simulating the prolog and epilog code generated by GCC. The PRLGC and EPLGC macros were provided on the TPF 4.1 system through APAR PJ29640 so that new C library functions written on the TPF 4.1 system can be migrated with little or no change, and the TMSPC and TMSEC macros are still supported on the z/TPF system so that library functions already coded with those macros can be migrated with little or no code changes. New library functions developed for the z/TPF system must be coded with the PRLGC and EPLGC macros.

Q6. Which problems can be resolved using a two-pass assembler? Can we accomplish both passes in a single pass or not? Give your comments with an example.

Ans. Assembly language is a computer language in which each statement corresponds to one of the binary instructions recognized by the CPU. Assembly-language programs are translated into machine code by an assembler. Assembly languages are more cumbersome to use than regular (high-level) programming languages, but they are much easier to use than pure machine languages, which require that all instructions be written in binary code.

Complete computer programs are seldom written in assembly language. Instead, assembly language is used for short procedures that must run as fast as possible or must do special things to the computer hardware. For example, Figure 17 shows a short routine that takes a number, checks whether it is in the range 97 to 122 inclusive, and subtracts 32 if so, otherwise leaving the number unchanged. (That particular subtraction happens to convert all lowercase ASCII codes to their uppercase equivalents.) This particular assembly language is for the Intel 8086 family of processors (which includes all PC-compatible computers); assembly languages for other processors are different. Everything after the semicolon in each line is a comment, ignored by the computer. Two lines (PROC and ENDP) are pseudo-instructions; they tell the assembler how the program is organized. All the other lines translate directly into binary codes for the CPU.

Many of the most common operations in computer programming are hard to implement in assembly language. For example, there are no assembly language statements to open a file, print a number, or compute a square root. For these functions the programmer must write complicated routines from scratch, use services provided by the operating system, or call routines in a previously written library.

There are two types of assemblers, based on how many passes through the source are needed to produce the executable program. One-pass assemblers go through the source code once and assume that all symbols will be defined before any instruction that references them. Two-pass assemblers (and multi-pass assemblers) create a table of all unresolved symbols in the first pass, then use the second pass to resolve these addresses. The advantage of a one-pass assembler is speed, which is not as important as it once was, given advances in computer speed and capabilities. The advantage of the two-pass assembler is that symbols can be defined anywhere in the program source. As a result, the program can be organized in a more logical and meaningful way, which makes two-pass assembler programs easier to read and maintain.
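A small sketch of the situation that forces the second pass: a symbol used before it is defined. The label name done and the instructions around it are invented for illustration:

        cmp  eax, 0
        je   done          ; pass 1: "done" is not yet in the symbol table, so the
        dec  eax           ;         assembler records this reference as unresolved
done:                      ; pass 1 later learns the address of "done"
        ret                ; pass 2 goes back and fills in the jump target

A one-pass assembler reading this same code would have to either reject the forward reference or backpatch the jump once the label finally appears.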
More sophisticated high-level assemblers provide language abstractions such as:

* Advanced control structures
* High-level procedure/function declarations and invocations
* High-level abstract data types, including structures/records, unions, classes, and sets
* Sophisticated macro processing
* Object-oriented features such as encapsulation, polymorphism, inheritance, and interfaces

The translation performed by an assembler is essentially a collection of substitutions: a machine operation code for a mnemonic, a machine address for a symbolic name, the machine encoding of a number for its character representation, and so on. Except for one factor, these substitutions could all be performed in one sequential pass over the source text. That factor is the forward reference: a reference to an instruction or symbol which has not yet been scanned by the assembler. The separate passes of a two-pass assembler are what allow it to handle forward references without restriction. If we instead impose certain restrictions, forward references can be handled without making two passes; these different sets of restrictions lead to the various one-pass assemblers. One-pass assemblers are particularly attractive when secondary storage is either slow or missing entirely, as on many small machines.
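As a sketch of the "collection of substitutions" view described above, consider one classic x86 statement and what the assembler substitutes for each of its parts (mov al, 61h really does assemble to the two bytes shown; the comment layout is only illustrative):

        mov  al, 61h       ; mnemonic mov (8-bit register AL, immediate) -> opcode B0
                           ; the characters "61h" -> the byte 61
                           ; assembled output: B0 61

Both substitutions are purely local; only symbolic addresses require the assembler to look beyond the current statement.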

Tuesday, December 24, 2019

Illusions in J.D. Salinger's Catcher in the Rye Essays

"Do not be misled by what you see around you, or be influenced by what you see. You live in a world which is a playground of illusion, full of false paths, false values and false ideals. But you are not part of that world" (Sai Baba). A world of illusion is an alluring, yet perilous place to enter. It can deceive the mind only to cause damage and distress. Holden Caulfield's life has led to an atrophy through his struggle of conceiving illusions as reality. In J.D. Salinger's novel, The Catcher in the Rye, Holden Caulfield battles the constant reminder of his brother Allie's death while he roams the streets of New York. Preceding his futile adventures, he is expelled from his fourth school, Pencey Prep. During his stay at... By hiding from this adolescent problem, Holden only defers the situation. As Holden aspires to be the catcher in the rye, he relies on his red hunting hat: "This is a people shooting hat... I shoot people in this hat" (Salinger 22). Holden becoming the catcher is arguably the biggest illusion in this novel. This quote is to be taken in a metaphorical sense: the hat is his armor or protection when he saves people in the rye from any maturation. When he puts on this hunting hat, he instantly feels the prerogative to become the catcher. He continues to reach for this duty as the catcher in the rye, but it prevails over his intuition and common sense on indispensable issues. In the climax, when Holden watches Phoebe on the carrousel, he puts on his hat as protection from the rain. "My hunting hat really gave me quite a lot of protection, in a way, but I got soaked anyway. I didn't care, though. I felt so damn happy all of a sudden" (Salinger 212-213). Throughout this novel, Holden had perceived this hunting hat as full protection and a barrier between him and the rest of the world. In this scene, Holden finally understands that this hat cannot protect him forever. It is known that this hat had also been a representation of Allie, because he "had very red hair" (Salinger 38), and now Holden is apprehending that Allie cannot protect him from everything. While Holden knew he had the hat on and it was offering the protection it could...

Monday, December 16, 2019

Phil 235 Paternalism Essay

Paternalism in the Medical Profession

Philosophy 235 EC: Biomedical Ethics

"The only appropriate and realistic model of the doctor-patient relationship is paternalism. Doctors are the medical experts; most patients have little, if any, reliable medical knowledge; implicit trust in one's physician is essential to the healing process; and doctors have the responsibility for our health and therefore have the duty to make all the important medical decisions." Critically assess that claim.

The issue of doctor-patient relationships has become more and more prevalent in our world today. It is hard to draw a clear line in deciding what the appropriate roles are of both the patient and the medical professional. The claim that the paternalistic model is the appropriate and most realistic model will be assessed in this paper. This model states that the doctor is the one in complete control, making all decisions on behalf of the patient, and the patient grants the doctor this responsibility, obeying any orders. In this model, patients act as children, who are ignorant and unknowledgeable, and doctors act as parents, not only guiding the child in the right direction but actually telling them what to do. Should doctors really hold complete responsibility for our health? Should they be the ones to make all the important medical decisions without patients having any say? This model will be examined in this paper in order to critically assess whether it should be dominant in our present society.

"The traditional view held by physicians themselves was that the physician is the captain of the ship, and that the patient has to follow orders." This view has only been strongly believed since the 19th and 20th centuries, when medical professionals were granted almost complete control over all decision making by their patients. Before that time, going to see a doctor was perceived as a last resort, and many would ignore their doctor's advice altogether. Over time, this view shifted and society began to believe that physicians "knew best, and therefore had not only the right but also the duty to make the decision." Today, fewer and fewer citizens continue to agree with this point of view, and instead other doctor-patient relationship models have emerged, identified by Robert Veatch: the engineering model, the priestly model, the collegial model, and the contractual model. The three alternatives to the priestly (paternalistic) model have emerged from a more contemporary perspective.

The engineering model states that the relationship between the two parties is nothing more than the doctor presenting the patient with the diagnosis, prognosis, and treatment options. Any decision as to which route to take is left entirely up to the patient. As the textbook explains, the doctor is nothing more than an "applied scientist", or a "plumber without any moral integrity", since ethics and values do not come into play in this relationship. Although I do not entirely agree with this model, the responsibility is lifted off of the physician, and the patient is given the freedom to decide. This would follow the argument of self-determination, as described by Dr. Ornstein.
This is the belief that all people who are competent should be the ones in control of determining their own fate. Society has not always believed in or relied on medical professionals. In fact, "until well into the nineteenth century, the physician was seen as a figure of last resort." Physicians were deemed useless and even harmful. With this in mind, I wonder why, in our day and age, we would rely even more on physicians than we did in the past. Today, we have the privilege of finding out almost anything we need to know within minutes via the Internet, and that is why sometimes it is the patient who knows more than his own doctor. It is important that patients assume some level of responsibility for their own health instead of relying on doctors, and the engineering model would display that type of behavior. That is another reason why I oppose the claim that paternalism is the ideal relationship between doctor and patient.

Another alternative model identified by Robert Veatch is the collegial model. This theory emphasizes that both parties are connected through common goals and interests, and that each acts as an independent equal. This model would suggest that the parties work together, and therefore the responsibility is divided equally between the patient and physician. There is collaboration here, engaging in activities which are satisfying to both and demonstrating an adult-adult relationship, because neither party has greater control over the other. This model goes hand in hand with the partnership model, which expresses that health care professionals and their patients act as partners or colleagues in the pursuit of the shared value of health. There is mutual participation in this model, which demonstrates that, unlike in the paternalistic model, the patient can help come to a medical decision. This model stresses that "the patient uses expert help to realize his ends." This expert help can come in many forms, and as I have previously mentioned, society today is exposed to numerous modes of gathering any type of information that is of interest. It is of course obvious that the physician has a stronger medical background and is more competent in that field, but that does not diminish the participation or contribution of the patient. With that being said, it is my opinion that the paternalistic model has clearly outgrown our culture, when there are models such as the partnership or collegial model which are more in sync with our world today.

Finally, the third alternative to the paternalistic model is the contractual model. This model is similar to paternalism in that it questions the assumption of equality; however, it differs in that there is a "contract" between both parties, leaving each with their own dignity and moral authority. What is crucial about this model is that it does not neglect the fact that there is an obvious difference in the degree of knowledge between the patient and the physician. Instead of focusing on that discrepancy, the model concentrates on the agreement between the two parties to exchange goods and services, and the enforcement of that agreement by government sanctions. In other words, this model compromises between partnership and the reality of medical care and, according to Veatch, is the only realistic way to share all responsibility while protecting the various parties in health care.
For example, both parties freely enter this contract, and therefore both are given the right to leave it, given proper notice. However, while partaking in the contract, each has duties and obligations, which may neglect the virtues of benevolence, care, and compassion that we see stressed in other models.

Leaving aside the three alternatives to the paternalistic model, there are several other arguments which come to the surface when critically assessing the above-mentioned claim. The first is that doctors must act like parents because patients know much less than doctors do. This emphasizes the idea that the doctor-patient relationship should be one of paternalism. This argument takes into account two different prototypes. The first is the parent-infant relationship, where the parent is the doctor, taking on an active role, and the infant is the patient, taking on a passive role. In this case the patient is extremely dependent on the medical professional. The second is the parent-adolescent relationship, where the physician guides the patient in the right direction, and the patient co-operates to the degree of obeying. Both suggest that the patient has no responsibility, and that the duty and obligation of all decisions rest on the shoulders of the physician. This proposes that patients are ignorant and unknowledgeable and that, given the opportunity to make their own decision, they would not be able to. It is likely that doctors know more than the average member of society; however, this is not to say that they are infallible. Mistakes can happen. As Professor Ornstein has stated, we cannot choose our fathers, but we can choose our doctors, and in my opinion there is no connection where the two should be related. If a patient feels they should seek out a second, third or fourth opinion, that is their own right, unlike the ability to seek out a second, third or fourth father; we do not have this option. It is possible and even probable that doctors will differ in their views, and each may guide their patient down a different path. Although a relationship between a physician and a patient should be based on a degree of trust and loyalty, if there is any sort of uncertainty, patients should not feel pressured into following a path they do not believe in. Getting another opinion is not disloyal or disrespectful; it is a patient's right. Additionally, today more patients recognize that it is unfair for doctors to take complete responsibility for our welfare, as we are exposed to so much free medical information. It is my opinion that it is the patient's duty to also act responsibly for his or her own welfare.

Another argument I have come across opposing this claim is that doctors may be experts in medical matters, but there may be other factors to take into account, such as ethical issues, when making a decision. Each doctor has taken an oath to save lives. This is their main concern, and their main goal for each patient. One must wonder whether or not this is always ethical. As Professor Ornstein has suggested, do we save someone who as a result must live the rest of his or her life in agonizing pain? Or do we relieve them of that pain and simply allow them to pass away? This is an ethical issue where many doctors may have opposing points of view, and some may decide that their job is to save the patient. That would be a paternalistic instinct; however, medical decisions should not be purely medical all the time.
There are always other factors to consider, such as the medical condition of the patient, their preferences, their quality of life, and socio-economic conditions. Each, of course, is given a weight dependent on the specifics and circumstances of the case. In the case of a patient who is experiencing excruciating pain, the doctor may come to the conclusion that the best option would be to remedy that pain with medication. It is important to note that this paternalistic act ignores all ethical issues and takes only medicine into account. Opposing this notion would be to consider how this medication might cloud the patient's judgment, or to recognize the patient's preferences prior to formulating a medical decision. Doctors might be experts in medical matters, but the other factors which are necessary to take into consideration deem the paternalistic view inadequate.

A final argument against the paternalistic view is that physician-patient interactions are negotiations. Viewing the interactions as negotiations is in itself opposed to paternalism, because the patient is given some level of autonomy to take part in the decision-making process. The goal is to reach a mutual agreement. In order to do so, there are certain steps that must be followed. Firstly, the negotiation should involve adequate disclosure by both parties. This is necessary so that values and objectives are clear and a fair negotiation can take place. Secondly, the negotiation should be voluntary, meaning uncoerced: neither party should feel threatened while entering into the negotiation process. And finally, the solution should be one of mutual acceptance. Of course, there are occasions where negotiation is not possible, for example in the case of an emergency, when the physician needs to save the patient without negotiating beforehand. In that case, the medical professional may act in a paternalistic way; however, if there is a competent patient, negotiation is possible and can often be characterized in terms of any of the above-mentioned models (parent-child, friends, partners, etc.). The fact that the relationship is seen as a negotiation counters the paternalistic view in that the patient is given choice. If the patient chooses to give up his autonomy and lay his destiny in the hands of his physician, that is his preference, unlike in the paternalistic model, where that is not a choice but the only way.

The paternalistic model is not the only realistic relationship between doctor and patient. "As a normative model, paternalism tends to concentrate on care rather than respect, patients' needs rather than their rights, and physicians' discretion rather than patients' autonomy or self-determination." As I have mentioned previously, there are many other factors that must always be taken into consideration when dealing with a patient. Autonomy, self-determination, and respect are surely incredibly important when dealing with a patient, and paternalism ignores those factors. The above-mentioned arguments and alternative relationship models clearly oppose the claim that paternalism is the only appropriate relationship. As I asked earlier: Should doctors really hold complete responsibility for our health? Should they be the ones to make all the important medical decisions without patients having any say? I believe the answer to both questions is quite clear: the responsibility should be shared, and the patient, if capable, should take part in the decision-making process.
That being said, paternalism is not the most appropriate model, and no one relationship trumps another. Instead, all must be taken into account depending on circumstance.

Sunday, December 8, 2019

Diversity Affects Communication

Diversity Affects Communication

Victoria Moore
University of Phoenix
Introduction to Communication COM/100
Bruce Turner
January 03, 2010

Diversity Affects Communication

1. What is cultural diversity? Why is an appreciation of diversity important in communication?

Cultural diversity is having a group made up of people from various ethnicities, backgrounds, religions, etc. Having an appreciation of diversity is extremely important and essential for effective communication. For example, if you are a person that makes a lot of gestures to express yourself but you are attempting to communicate with someone who comes from a culture that does not utilize gestures, you would want to take this into consideration so that you do not come across as rude or offensive to the person that you are communicating with.

2. How would you describe your cultural background? How do your values affect the way you communicate with others and the way they communicate with you?

Culture can be defined as a "system of shared ideas and meanings, explicit and implicit, which a people use to interpret the world and which serve to pattern their behavior". That being said, my cultural background is very diverse. I am a young, African-American woman. An immediate assumption would be that I am liberal in both actions and ideas; however, that is false. I actually tend to have very conservative viewpoints, since I am also moderately religious. My values and my upbringing affect almost every aspect of my everyday life, including the way that I communicate with others. The values that have been instilled in me are what tell me to be respectful to others and in return demand respect for myself. For example, I was in my very early twenties when I became a manager at the hotel where I work. Most of the people that I managed were older than I was; however, they understood that I could be very personable and relate to them on a social level, but that I also could handle business.

3. How do gender differences affect communication?

As I get older, and hopefully wiser, I learn more about how gender affects communication. I have learned how to communicate with the opposite sex by learning what they are influenced or hindered by. For example, I know that the worst time to communicate with my boyfriend is when he is watching any type of sporting event. I have learned that during this time anything I say goes in one ear and out the other. I have also learned that, as the saying goes, "You attract more flies with honey than with vinegar." What this means in terms of communication with the opposite sex is that sometimes you can flash a nice smile to get something you want. Also, the ways that men and women communicate are very different. Usually men are more straightforward and to the point, whereas women tend to elaborate more and also tend to be more emotional when communicating.

4. Describe three barriers to bridging differences. What strategies may you use to overcome them?

Three barriers that were mentioned in the reading are:

1) Stereotyping and prejudice: These are opinions that are formed, and are most likely wrong, about a group of people. This can be overcome by simply not judging a book by its cover. We should not rush to form opinions or imply certain things about people based on their ethnicity, religion, sexual orientation, etc.
2) Assuming superiority: As the text mentioned, this is one of the reasons that our country has terrorist attacks aimed at it. Our country is hated by numerous others because we assume that we are the best, the richest, the most powerful. This can be overcome by communicating more with other countries and not just imposing our will on them.

3) Assuming similarity: This is the assumption that other people think and act the way that you do. This can be overcome by taking other people's thoughts and ideas into consideration.

5. Identify three advantages and three challenges of working in a multicultural team.

Three advantages of working in a multicultural team could be:

1) Unique viewpoints: Someone may have a new and unique way of looking at something or resolving an issue.

2) Opportunity to learn: Sometimes we get so stuck in our own box that we forget that there are other rich cultures out there that we could learn about.

3) Opportunity for understanding: We may be able to understand more about a person or culture by understanding what they have gone through.

Three challenges of working in a multicultural team could be:

1) Communication barriers: There may be a language or some other type of communication barrier that could be a hindrance to a team.

2) Inability to see past differences: Sometimes there may be so much cultural tension that team members may not be willing to work through it.

3) Different ways of working: Some cultures may have a mentality of getting things done as soon as possible, while others may believe in taking their time.

6. Does diversity affect ethical decisions? Explain your answer.

I think that diversity does affect ethical decisions, because different cultures are going to have different ways of looking at ethical issues. For example, the issue of abortion may be more accepted in one culture as opposed to another; therefore, people from those cultures are going to have two very different opinions about it.