Saturday, August 31, 2019

Perfect Competition Essay

For a market to be perfectly competitive, one of the main criteria is that all firms (and consumers) are price takers. The following conditions are also necessary:

1. There must be many buyers and sellers in the market for an identical product.
2. Firms' products are identical.
3. Buyers and sellers must be fully informed about prices, products, and technology.
4. There are no barriers to entry (or exit).
5. Selling firms are profit-maximizing entrepreneurial firms.

The scenario about the ice cream industry depicts a perfectly competitive market. Buyers view vanilla ice cream from different stores as identical products, new stores can enter the industry, and each store has no influence on the going market price. In perfect competition, many firms sell identical products to many buyers. Therefore, if Falero charges even slightly more for a box than other firms charge, it will lose all its customers, because every other firm in the industry is offering a lower price. In other words, one of Falero's boxes is a perfect substitute for boxes from the factory next door or from any other factory. So a perfectly competitive firm faces a perfectly elastic demand for its output at the current market price. In this case, the equilibrium market price is $5 per box, so Falero faces a perfectly elastic demand curve for its boxes at $5. Because a perfectly competitive firm faces a perfectly elastic demand curve at the market price, it can sell any quantity it chooses at this price. Therefore, the change in total revenue that results from a one-unit increase in the quantity sold equals the market price, so the marginal revenue curve is a horizontal line at the market price of $5 per box. Since the demand curve is also a horizontal line at the market price, the demand curve and the marginal revenue curve are the same.
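The claim above, that marginal revenue for a price taker equals the market price at every output level, can be checked numerically. This is a minimal sketch using the $5-per-box figure from the Falero example; the function names are illustrative, not from the essay:

```python
# For a price-taking firm, each extra box sold adds exactly the market
# price to total revenue, so marginal revenue is flat at $5.
PRICE = 5  # equilibrium market price per box (from the Falero example)

def total_revenue(quantity, price=PRICE):
    """Total revenue = price x quantity for a price taker."""
    return price * quantity

def marginal_revenue(quantity, price=PRICE):
    """Change in total revenue from selling one more unit."""
    return total_revenue(quantity + 1, price) - total_revenue(quantity, price)

# MR equals the market price at every output level, so the MR curve
# coincides with the horizontal demand curve at $5.
for q in (1, 10, 100):
    assert marginal_revenue(q) == PRICE
```

The loop confirms that selling one more unit always adds $5 to revenue, which is exactly why the demand and marginal revenue curves coincide.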
Economic profit equals total revenue minus total cost, so profit is at its maximum when the difference between total revenue and total cost is at its greatest. At a price of $12,000, a profit-maximizing firm in a perfectly competitive market will produce 4,000 hybrid vehicles per year, since this is the quantity where marginal cost equals the market price (which equals a competitive firm's marginal revenue). Since profit is the difference between total revenue (TR) and total cost (TC), we can rewrite this expression as:

Profit = TR – TC
Profit = (P x Q) – (ATC x Q)
Profit = (P – ATC) x Q

In this case, profit = ($12,000 per vehicle – $16,000 per vehicle) x 4,000 vehicles = –$4,000 x 4,000 = –$16,000,000, which is an economic loss. This is the blue shaded area (labeled A) in the graph above.

In the sock example, the firm will produce as long as the market price is above the shutdown price of 10 cents, so the firm's supply curve corresponds to the portion of the marginal cost curve at prices above 10 cents. For example, at 10 cents the firm will produce 150,000 pairs of socks, so (150, 10) is a point on the firm's supply curve; at 15 cents the firm will produce 200,000 pairs, so (200, 15) is another point. For prices below 10 cents, the firm will not produce at all.

In the next example, the shutdown price of $2 marks the point at which average variable cost is at its minimum. In the short run, when price is below $2, a firm's variable costs exceed its total revenue, so the firm would maximize profit (minimize losses) by shutting down. The break-even price of $4 marks the point at which average total cost is at its minimum. In the long run, when price is below $4, a firm's total costs exceed its total revenue, so the firm would maximize profit (minimize losses) by exiting the market.
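The profit identity Profit = (P – ATC) x Q can be verified directly with the hybrid-vehicle numbers. A minimal sketch, using only figures stated in the example:

```python
# Economic profit for the hybrid-vehicle example:
#   Profit = (P - ATC) x Q
price = 12_000     # market price per vehicle ($)
atc = 16_000       # average total cost per vehicle ($)
quantity = 4_000   # profit-maximizing output, where MC = P

profit = (price - atc) * quantity
print(profit)  # -16000000, i.e. an economic loss of $16 million
```

Since price sits below average total cost, the per-unit margin is negative and the firm runs at a loss, matching the shaded-area calculation above.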
In the short run, the individual supply curve for a firm is the portion of the marginal cost curve that corresponds to prices greater than or equal to the shutdown price of $2. In perfect competition, the market supply curve is simply the horizontal sum of all the firms' marginal cost curves. At prices below $2, firms will not produce in the short run. At $2, firms will produce a total of 3 yo-yos per firm x 100 firms = 300 yo-yos, so (300, 2) is a point on the short-run industry supply curve. Similarly, at $3, firms will produce a total of 4 yo-yos per firm x 100 firms = 400 yo-yos, so (400, 3) is another point. Use similar calculations to plot the rest of the market supply curve.

The market price of $3 corresponds to a point on the MC curve that lies between the firm's ATC and AVC. Therefore, in the short run, although the firm cannot cover all its fixed costs, it will generate enough revenue to cover all its variable costs. The firm will ignore the fixed costs and produce in the short run. In the long run, the firm will shut down and exit the industry, since $3 is below the break-even (long-run exit) price. Because the firm can never cover its fixed costs and the business runs at a loss, it is profit-maximizing to exit the market.

A firm's short-run decision is not based solely on whether it incurs profits or losses. It depends on whether the market price is below or above its shutdown price, that is, its minimum average variable cost. As long as the market price is above average variable cost, a firm will produce in the short run, since it is covering its variable cost. In cases where there are fixed costs and the price is equal to or just above the shutdown price, average total cost will be higher than the market price, which leads to losses.
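The horizontal-summation step described earlier can be sketched in a few lines. The per-firm quantities come from the yo-yo example; the function name and dictionary layout are illustrative, and only the two tabulated prices are covered:

```python
# Short-run industry supply as the horizontal sum of 100 identical
# firms' supply curves (yo-yo example). Below the $2 shutdown price
# no firm produces, so industry quantity is zero.
NUM_FIRMS = 100
firm_supply = {2: 3, 3: 4}  # price ($) -> yo-yos per firm (from the text)

def industry_quantity(price):
    """Industry output at a tabulated price: per-firm output x 100 firms."""
    if price < 2:  # below the shutdown price: no short-run production
        return 0
    return firm_supply[price] * NUM_FIRMS

assert industry_quantity(1) == 0    # below shutdown price
assert industry_quantity(2) == 300  # the point (300, 2)
assert industry_quantity(3) == 400  # the point (400, 3)
```

The assertions reproduce the two plotted points (300, 2) and (400, 3); the same multiplication generates the rest of the curve from any per-firm schedule.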
However, in the short run, a firm's decision to produce is independent of any fixed costs, so even if it cannot cover fixed costs and earn profits, it will produce nonetheless. If the price exceeds the marginal cost of increasing output by one unit, the firm will produce another unit. It keeps increasing its output until it reaches a point where increasing output by one more unit has a marginal cost that is greater than marginal revenue (in this case, the going market price). In this example, the marginal cost of increasing output from five to six units is less than the market price, while the marginal cost of increasing output from six to seven units is greater than the market price. So the firm stops at six units; this is its profit-maximizing quantity. The table below summarizes the firm's marginal cost.

The firm considers its minimum average variable cost in its short-run production decisions. It will produce in the short run if the market price is equal to or greater than its minimum average variable cost; that is, as long as it can cover its variable costs, it will produce in the short run. The firm considers its minimum average total cost in its long-run production decisions. It will produce in the long run if the market price is equal to or greater than its minimum average total cost; that is, as long as the firm at least breaks even in economic profit. The table below summarizes the firm's average variable cost, which equals average total cost since there is no fixed cost.

The initial long-run equilibrium was at the intersection of the initial industry short-run supply and demand curves (S100 and D1), at coordinates (4,000, 65). After the change in consumer preferences, the long-run equilibrium is at the intersection of the new industry short-run supply and demand curves (S70 and D2), at coordinates (2,000, 60).
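The marginal decision rule described earlier (keep producing while the price covers the marginal cost of one more unit) can be sketched as a short loop. The marginal-cost schedule below is illustrative, not the essay's actual table, chosen only so that the fifth-to-sixth unit costs less than the price and the sixth-to-seventh costs more, matching the example's stopping point:

```python
# Profit-maximizing output rule for a price taker: produce another unit
# while its marginal cost is at or below the market price.
price = 10  # hypothetical market price ($)

# Hypothetical MC schedule: unit number -> marginal cost of that unit ($).
marginal_cost = {1: 2, 2: 3, 3: 5, 4: 6, 5: 8, 6: 9, 7: 12}

quantity = 0
for unit in sorted(marginal_cost):
    if marginal_cost[unit] > price:  # next unit would lose money
        break
    quantity = unit  # this unit still adds to profit (or breaks even)

print(quantity)  # 6
```

With these numbers the sixth unit (MC of 9) is still worth producing at a price of 10, while the seventh (MC of 12) is not, so the loop stops at six units, the profit-maximizing quantity in the example.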
The long-run industry supply curve will pass through these long-run equilibrium points, so you should have placed each of the black points (X symbols) at these coordinates. Notice that this industry is an increasing-cost industry: an increase in demand raises factor prices. Firms stop entering the market and expanding production at a higher equilibrium market price because the price at which zero profit is made has risen. Therefore, the long-run supply curve is upward sloping.

In the long run, firms in a perfectly competitive market enter and exit the market without barriers, and they make zero economic profit. The reasoning goes as follows: if firms make economic profits, new firms will enter the market, shifting the market supply curve to the right until the market price has fallen enough that no firm earns an economic profit and there is no longer an incentive to enter. If firms are incurring economic losses, firms will exit the market, the market supply curve will shift to the left, and the market price will rise until the remaining firms make zero economic profit. So, in the long run, firms operate at the "break-even" point: the minimum of both the short-run average total cost curve and the long-run average total cost curve.

Friday, August 30, 2019

Life without social media Essay

The 21st century is an era greatly influenced by "reality television." If we're not trying to keep up with the Kardashians, we're watching Big Brother, The Bachelor/Bachelorette, or Flavor of Love. This is a contrast to the 20th century, which was the era of the silver screen, the era of cinema. Rather than leaving little to the imagination like television today, the films of that era pushed the boundaries of our imagination and captured our wildest dreams. Two of the greatest movies of this time were A Trip to the Moon, directed by Georges Méliès (1902), and The Great Train Robbery, directed by Edwin Porter (1903).

A Trip to the Moon is an early example of narrative film; Méliès's introduction of film editing helped distinguish narrative films from music, books, and theatre. His edits were simple. For example, to make people disappear in a cloud of smoke, he would let smoke build in front of the actors, stop filming the scene, move the actors out of the frame, and start recording again, making the audience believe that the actors had instantly disappeared before their eyes. This brought a new dimension into film and introduced film editing to the world. He shot his films at 14 frames per second, and his shots always remained stationary, but his amazing set designs, hand-painted backgrounds, and in-camera effects took an audience living decades before the first manned moon landing of 1969 to a world of pure science fiction and imagination. Taking what Méliès introduced into narrative movies and running with it, Edwin Porter, the father of the narrative film, introduced what was at the time considered state-of-the-art filmmaking technology that further advanced film narrative.
In The Great Train Robbery, Porter introduced several film technologies, such as cross-cutting, double exposure, camera movement, tracking and panning, out-of-sequence shots, and the colourizing of people and actions. These edits and special effects were very effective at drawing the audience into the movie: special effects let the audience know when guns were fired and how joyous the people were when they danced, effectively bringing the audience into that world. He also introduced a different film method, location shooting. Unlike Méliès, whose camera always remained stationary and whose films were shot on sets, Porter's camera moved with the actors, and his set wasn't a set at all: it was outside, it was in the train, it was wherever the story took them. This took film narrative to a new level; it brought the audience along on the journey, something film lacked before Méliès and Porter.

Something both their films had that films before them didn't was a story. Before them, films did not have any structure or a linear storyline; they didn't have a beginning leading to a climax leading to an end. Their films were also significantly longer than the films before them, Méliès's film running 10 minutes and Porter's 12 minutes. Telling a story helped them reach their goal: what they wanted their audience to get from the film was the story itself. A Trip to the Moon follows a group of very intelligent astronomers as they hatch an intricate plan to travel to the moon (Westminster, 2010), while The Great Train Robbery is the story of four bandits who tie up and assault a worker at the train station, sneak onto the train, steal all the passengers' money, and shoot at them as they make a getaway. A child finds the worker at the train station tied up and tells the sheriff, and they go on a hunt to catch the bandits.
To compare these two films and say which was more effective at reaching its goal is hard, practically impossible. They both told their stories, but if it weren't for Méliès's introduction of film editing, many of the effects used in Porter's film wouldn't have been possible. A Trip to the Moon was the first science fiction film, the first of its kind ever. It was extremely popular and helped the cinema market transition into narrative films. Not to take away from Porter: The Great Train Robbery took what Méliès did to a whole new level and helped solidify narrative film's place in the cinema market. Comparing these films is like comparing the original iPhone to the iPhone 5: of course the iPhone 5 is better and more effective at doing its job than the original iPhone, but without the original iPhone there would be no iPhone 5.

Works Cited

Westminster. (2010, November 12). A Trip to the Moon. (N. Montano, Editor) Retrieved September 13, 2013, from Film110: https://film110.pbworks.com/w/page/12610142/A%20Trip%20to%20the%20Moon

Nursing Practice and Profession Abstract

Nurses committed to interpersonal caring hold themselves accountable for the human well-being of the patients entrusted to their care. Being accountable means being attentive and responsive to the health care needs of each individual patient. It means that my concern for the patient transcends whatever happens during my shift, and that I ensure continuity of care when I leave the patient. In today's highly fragmented system of care, patients often find themselves unable to point to any one caregiver who knows the overall situation and is capable of and willing to coordinate the efforts of the healthcare team. Being responsive and responsible earns a patient's trust that "all will be well" as the healthcare needs are addressed. This will be the central theme of this paper in the quest to establish the nurse's accountabilities in evaluating or implementing change.

Nurses who are sensitive to the legal dimensions of practice are careful to develop a strong sense of both ethical and legal accountability. Competent practice is a nurse's best legal safeguard. When working to develop ethical and legal accountability, nurses must recognize that both deficiencies and excesses of responsible caring are problematic. Although it is reasonable to hold oneself accountable for promoting the human well-being of patients, nurses can err by setting unrealistic standards of responsiveness and responsibility for themselves. Prudence is always necessary to balance responsible self-care with care for others.
Inexperienced nurses might feel totally responsible for effecting patient outcomes beyond their control and become frustrated and sad when unable to produce the desired outcome. Conversations about what it is reasonable to hold ourselves and others accountable for are always helpful.

Each employing institution or agency providing nursing services has an obligation to establish a process for reporting and handling practices, by individuals or by health care systems, that jeopardize a patient's health or safety. The American Nurses Association Code of Ethics obligates nurses to report professional conduct that is incompetent, unethical, or illegal. For nurses, incompetent practice is measured by nursing standards, unethical practice is evaluated in light of the professional codes of ethics, and illegal practice is identified in terms of violations of federal legislation and laws.

Nurses must respect the accountability and responsibility inherent in their roles. They have moral obligations in the provision of nursing care; hence, they collaborate with other health care providers in providing comprehensive health care, recognizing the perspective and expertise of each member. Nurses have a moral right to refuse to participate in procedures that may violate their own personal moral conscience, since they are entitled to conscientious objection. They must keep all information obtained in a professional capacity confidential and employ professional judgment in sharing this information on a need-to-know basis. Nurses are expected to protect individuals under their care against lack of privacy by confining their verbal communications to appropriate personnel, settings, and professional purposes. They are obliged to adhere to practices that limit access to personal records to appropriate personnel. They must value the promotion of a social as well as economic environment that supports and sustains health and well-being.
This includes involvement in detecting the ill effects of the environment on the health of the patient, as well as the ill effects of human activities on the natural environment. Nurses must acknowledge that the social environment the patient inhabits has an impact on health. They must respect the rights of individuals to make informed choices in relation to their care, and they have a responsibility to inform individuals about the care available to them and their choice to accept or reject that care. If a person is not able to speak for themselves, nurses must ensure the availability of someone to represent them. It is vital to respect the decisions made concerning the individual's care.

Standards of care are one measure of quality. Quality nursing care means care provided by qualified individuals. Likewise, the individual needs, values, and culture of the patient relative to the provision of nursing care must be respected and considered; they should not be compromised for reasons of ethnicity, gender, spiritual values, disability, age, economic, social, or health status, or any other grounds. Respect for an individual's needs includes recognition of the individual's place in a family and the community; it is for this reason that others, most significantly family members, should be included in the provision of care. Respect for needs, beliefs, and values includes culturally sensitive care and attention to the need for comfort, dignity, privacy, and the alleviation of pain and anxiety as much as possible.

"Evidence-based practice (EBP) is a problem solving approach to clinical practice that integrates the conscientious use of best evidence in combination with a clinician's expertise as well as patient preferences and values to make decisions about the type of care that is provided" (Melnyk, 2004). Quality-of-care outcomes refer to the accuracy and relevance demonstrated by decisions concerning the need for medical and surgical intervention.
Evidence of appropriateness in healthcare is necessary to improve health outcomes, balance costs, provide guidance to physicians, and meet the needs of the new informed health consumer. Appropriateness is unlike effectiveness, since the latter refers to the degree to which an intervention achieves the objectives set (Muir Gray, 1997). One criterion of appropriateness is that of necessity. As technology and improved methods of care have advanced, access to appropriate interventions should likewise improve. Today some interventions are still limited, such as magnetic resonance imaging (MRI) in rural communities, and since access to this technology is limited, a criterion of necessity is used to determine who is able to access it and how quickly. Therefore, although use of MRI may be appropriate in diagnostics, it may be underused. Advancements in technology, interventions, and clinical research will provide updated evidence, which in turn affects ratings of appropriateness (Muir Gray, 1997).

Clinical guideline statements are developed from evidence to assist healthcare practitioners in making appropriate health interventions (Woolf, Grol, Hutchinson, Eccles & Grimshaw, 1999). A clinical guideline may be a general statement or a concise instruction on which diagnostic test to order or how best to treat a specific condition. The purpose of clinical guidelines is to serve as a tool for making decisions that will result in more consistent and efficient care. Guidelines are not rules, nor are they mandatory. The benefits of clinical guidelines include improved health outcomes; increased beneficial/appropriate care; consistency of care; improved patient information; the ability to positively influence policy; and direction for health care practitioners.

References

Agency of Healthcare Research and Quality. (n.d.). Outcomes research fact sheet. [Online]. Available: https://www.ahrq.gov/professionals/clinicians-providers/guidelines-recommendations/index.html

Brook, R. H. (1994). Appropriateness: The next frontier. [Online]. Available: http://www.bmj.com/content/308/6923/218.full?ijkey=t7GNbMJu0NIhA

Fitch, K., Bernstien, S. J., Aguilar, M. D., Burand, B., LaCalle, J. R., Lazaro, P., van het Loo, M., McDonnell, J., Vader, J. P., & Kahan, J. P. (2001). The RAND/UCLA appropriateness method user's manual. [Online]. Available: http://www.rand.org/pubs/monograph_reports/MR1269.html

John A. Hartford Foundation. (n.d.). [Online]. Available: http://www.johnahartford.org/

Muir Gray, J. A. (1997). Evidence-based healthcare: How to make health policy and management decisions. New York: Churchill Livingstone.

Woolf, S. H., Grol, R., Hutchinson, A., Eccles, M., & Grimshaw, J. (1999). Clinical guidelines: Potential benefits, limitations and harms of clinical guidelines. [Online]. Available: http://www.bmj.com/content/318/7182/527.full

Thursday, August 29, 2019

How effective are Samsung and Apple at attracting customers? Assignment

How effective are Samsung and Apple at attracting customers? - Assignment Example

In this age of cutthroat competition, a business relies greatly on its existing customers to sustain itself and survive. The purpose of a business is to fulfill the demands of consumers, and in turn, the customers make it possible for the business to achieve its aims and objectives. Customers are therefore adjudged the most important stakeholders of a business. A business does not operate in a vacuum, and it is impossible for an organization to function without customers. Hence, it is obvious that customers are the resource upon which the sustainability and success of a firm depend. A number of scholars have argued that to acquire and maintain a competitive edge over rivals, organizations should develop long-term relationships with their existing customers. Along the same lines, however, it is also imperative that a company attract new customers to increase its volume of business. Thus, the existing customers are the backbone of a company, while new customers take the company forward and help it achieve new heights (Cherunilam, 2010). Attracting new customers has always been a challenging task for companies, and those that have succeeded in it have flourished. This study is meant to analyse the significance of customer attraction to the welfare of a firm. In addition, the strategies adopted by firms to achieve a high rate of customer attraction will be evaluated. For this study, Samsung and Apple have been chosen in order to analyse the effectiveness of their strategies in attracting new customers.

Research Objectives

The research objectives are as follows:

To identify the significance of attracting new customers for a business.
To assess the effectiveness of Apple's and Samsung's strategies for attracting new consumers.

Research Questions

On the basis of the objectives of the research, the following are the research questions:

Q1.
What is the importance of attracting new customers for a business?

Q2. What is the level of effectiveness of the strategies pursued by Apple and Samsung in attracting new consumers?

Limitations of the Study

A study is always accompanied by certain limitations, the factors that impact a study negatively. The following are the limitations of this study:

The study will involve an assessment of Samsung's and Apple's strategies only, and hence drawing a generalized picture is not possible.
Due to limitations of resources and time, the study will not be able to carry out certain activities that might have increased the overall credibility of the report.

CHAPTER II - Literature Review

The literature review holds a vital position in a research paper. It encompasses an in-depth discussion of the published information about the topic of concern. In addition, the literature review will take into consideration the theories stated by previous authors, and an evaluation will be made of their effectiveness in the present context. For example, an author might have put forward a certain theory which was relevant for that particular era, but the same theory might not be

Wednesday, August 28, 2019

Analysis of Carphone Warehouse Financial Statements Essay

Analysis of Carphone Warehouse Financial Statements - Essay Example

mobile services into one single media delivery chain, facilitated by the expansion of high-speed wireless internet and broadband services worldwide (PricewaterhouseCoopers, 2006). This new economic era changed many industries, since it pushed up the demand for high-tech products such as mobile telephones as mobile technology became a part of the new integrated supply chain of high-tech products. In the United Kingdom, a major player in the communication services industry is The Carphone Warehouse Group. The emphasis of this report is to analyze the financial position, performance, and prospects of this firm, as well as to provide a brief analysis of the accounting standards that Carphone Warehouse utilizes in the presentation of its financial reports.

Carphone Warehouse is the biggest player in the European market in the independent retailing of mobile phones and related services, with over 2,200 retail stores across 11 countries (Carphone, 2007). The company established its operations in the United Kingdom in 1989; its founder, Charles Jones, envisioned a firm with the potential to dominate the European market through the implementation of an organic growth model. The organic growth business model is a business or economic philosophy that emphasizes steady, constant growth over a prolonged period of time (Moore, 2006). An example of a booming economy that utilized this model to reach constant 10% economic growth for multiple decades is the Chinese economy. Carphone Warehouse transformed from a small mobile retailer 18 years ago into a multinational giant in the telecommunications industry, obtaining annual sales of 3.99 billion pounds in 2006 (Annual Report: Carphone, 2006). The company's strategic focus includes continued market share growth, value-added services, building lifetime customer relationships, and increasing productivity and profitability (Carphone, 2007).
The mobile services industry generated global revenues of approximately 3.32 trillion dollars

Tuesday, August 27, 2019

Traditional Management Systems VS. CRM And SCM Essay

Traditional Management Systems vs. CRM and SCM - Essay Example

The paper shall first compare traditional management with CRM, and shall then compare traditional management with SCM. In conclusion, it is evident that both customer relationship management systems and supply chain management systems create increased efficiency, achieve more cost savings, and generate greater profits for organizations that implement them.

Customer Relationship Management (CRM) is a strategy used to learn more about customers' needs and behaviors in order to solidify their loyalty to the business's offerings (Wailgum & Patton, 2011; "What Is CRM?," 2010). This strategy enables businesses to effectively utilize their resources to increase their knowledge of the behavior and value of their target customers. This increased insight enables businesses to identify and target the best customers, customize and/or personalize their products and services, track customer contacts, add cross-sell and upsell opportunities, reduce costs, and increase overall profitability. CRM may mean different things to different industries, but ultimately its purpose is to help organizations derive competitive advantages that will sustain their long-term profitability. This section shall differentiate traditional management systems from CRM in terms of differences in approaches, achieving efficiencies, cost savings, and firm profitability.

Differences in approaches (empowering customers, becoming a trusted partner)

A good example for comparing traditional management systems with CRM is the marketing function of an organization. Under traditional management systems, marketing was product-based and company-focused. Management was more concerned with how much control it had over the message conveyed to the customer. In these cases, the company was the active participant in the marketing process, whereas the consumer was inactive or passive.
In contrast, under CRM, customers are empowered; for example, Dell customers can configure their computers prior to ordering through Dell's website. CRM enables companies to ensure that only those products or services that consumers want are produced. This alters organizations' marketing strategies from the traditional push to pull strategies. Furthermore, the increased consumer participation that is encouraged by CRM enables organizations to understand their customers' requirements better. This makes organizations that have CRM more trusted partners than firms that are stuck with traditional management systems.

Achieving efficiencies

CRM management systems are generally supported by information technology (IT) solutions that are designed for the unification of customer information (Kumar, 2011). Where these solutions are well integrated with other business systems in an organization and/or with partner organizations, the company can centralize all its customer information in a few IT applications. This means that senior management can easily be presented

Monday, August 26, 2019

Law of Tort, Fundamentals of Business Law Essay

Law of Tort, Fundamentals of Business Law - Essay Example

In Fairchild's case, the industrial employers had a duty of care towards the employees in ensuring that:

A man purchased a bottle of ginger beer from a shop for his girlfriend. The bottle in which the beer was contained was opaque, and it was impossible to clearly see its contents. On pouring out the beer, it was found to contain the remains of a snail. The girlfriend became sick and sued the manufacturer for damages in tort. It was held that the defendant was liable, since he owed her a duty of care to ensure that the bottle did not contain any objects apart from the beer itself.

Under the tort of negligence, the plaintiff cannot successfully sue the defendant unless he or she proves injury. Even if damage is evident, the plaintiff must also prove that the injury suffered is directly attributable to the damage. The plaintiff might suffer an injury not directly attributable to the damage; if this is the case, the action will fail. 2

In Fairchild's case, the workers inhaled excessive asbestos and contracted mesothelioma, a cancer associated with the inhalation of such substances. The injury here is the disease suffered. This gave the workers an automatic avenue to sue their employers for damages. The situation would have been different had the workers not contracted the disease or any injury of a similar nature. The employers knew very well that excessive inhalation of asbestos would cause the disease, but they did not take reasonable steps to avoid it.

Standard of Care

Apart from the duty of care that one owes one's neighbour in actions in which one ought to have him in contemplation, there are cases where a standard of care must be shown. The courts have the burden of determining whether the defendants met the standard of care. A standard of care is that expected of an ordinary prudent person in a given situation.
If a person has held himself out, or led others to believe, that he can execute a given task, then he owes his clients a standard of care to perform that task without harming them. A doctor in a reputable hospital, for instance, owes a patient a standard of care and should carry out the work expected of a doctor in such a hospital, and the patient may expect to receive that standard of care. 3 Causation of Damage under Negligence The general rule under negligence is that the burden of proving negligence lies on the plaintiff. But in the case of workplace accidents, the plaintiff need not prove negligence if the accident could not have occurred had the defendant not been negligent. In such cases, the plaintiff relies on the principle of res ipsa loquitur, i.e. the facts speak for themselves. The burden of proof then shifts to the defendant, who must convince the court that the accident would still have occurred without his own negligence. 4 If the defendant successfully argues that he was not negligent or convinces the world

Sunday, August 25, 2019

Judaism Research Paper Example | Topics and Well Written Essays - 1250 words - 1

Judaism - Research Paper Example Even though evidence cannot be provided for the existence of one or many Supreme Beings, there is evidence for the power of religion. There are numerous religions across the globe; the most prominent are Islam, Judaism, Christianity, Buddhism, and Hinduism. These religions have symbols, narratives, and sacred histories whose purpose is to explain the meaning and origin of life. In the same way, from their beliefs about human nature, people may derive ethics, morality, and religious laws. The religions have clergy, organised behaviours, holy scriptures, holy places, and a definition of what makes up adherence. The practice of religion may also include commemoration, feasts, festivals, prayer, sacrifices, sermons, and rituals. Besides that, they also have myths, funerary services, and other aspects of human culture. Drawing on a variety of sources, this paper will address the history of Judaism and its present practice. Judaism is among the oldest religions on earth still practised today. Its history, traditions, and beliefs are recorded in the Hebrew Bible. Judaism is a religious tradition that dates back about 4,000 years and is rooted in the eastern region of Canaan. Canaan is the biblical name of the region between the River Jordan and the Mediterranean, the equivalent of the present-day Palestinian and Israeli territories (Shahak, 1994). This was during the Bronze Age in the Middle East. Even though the Jewish calendar dates back more than five thousand years, various scholars argue that the commencement of the Judaic faith is linked to the Israelites and their forefather Abraham. This is estimated to be around 164 B.C.E. The beliefs and practices of classical Judaism did not emerge until the 1st century (Schachter-Shalomi & Segel, 2013). In this regard, Judaism

Saturday, August 24, 2019

Dutchtown High School Research Paper Example | Topics and Well Written Essays - 2000 words

Dutchtown High School - Research Paper Example A number of classes are offered at Dutchtown High School, and all students are encouraged to take the four major subjects: science, social studies, math, and English. The school also provides an opportunity to get into an in-state university. The school's motto is to provide an environment of excellence where all students can become lifelong learners and productive citizens. In addition, the school offers courses in United States history, studio art 2-D design, biology, computer science, art history, calculus AB & BC, chemistry, English language and composition, politics, studio art drawing, US government, English literature, world history, European history, and physics. The state of Louisiana has an enrollment of 703,309 students, and the attendance rate for the year 2010-2011 was 94.8% (U.S. Census Bureau, 2012). The in-school suspension rate for 2010-2011 was 9.6%, while the percentage of classes taught by highly qualified teachers for the same year was 88.8%. The expenditure per student was $10,622. The state received a score of 91.8 in 2010, a 2-star ranking, and 93.3 in 2011, a grade C ranking (U.S. Census Bureau, 2012). However, the area where the school is located is vulnerable to hurricanes and tropical systems, since it is low-lying and close to the coast of southeast Louisiana. The town lies 23 feet above sea level, which is slightly lower than most areas of the United States (U.S. Census Bureau, 2012). Geismar, in turn, is located in Louisiana, a state in the southern region of the US. Louisiana is among the most extensive and populous states of the United States. This has contributed greatly to shaping the demographic patterns of Dutchtown High School, in that the school boasts a large population of students and teachers. 
The average student-teacher ratio is 25:1, which is a moderate ratio. The capital of Louisiana is Baton Rouge, whereas the largest city in the state is New Orleans. One interesting political fact is that Louisiana is the only state in the US whose political subdivisions are known as parishes. Dutchtown High School is located in Ascension Parish. These parishes are the equivalents of counties in many other US states (U.S. Census Bureau, 2012). Ethnically, Louisiana's urban environments are multi-cultural and multi-lingual surroundings. The state's heritage is by far shaped and influenced by the admixture of 18th-century Native American, African, and Spanish cultures. These cultures and ethnicities are considered exceptional in US statistics, and have also played a major role in shaping the school's ethnicity and cultural heritage. This multiculturalism stems largely from the colonial history of the state, which the French and the Spanish colonized well before the British took over colonizing America.

Friday, August 23, 2019

What precisely is HRM and what evidence is there to support the Essay - 2

What precisely is HRM and what evidence is there to support the contention that it is linked in some way to improvements in a firm's performance - Essay Example This field of management looks into the most effective utilisation of employees to achieve organisational and individual goals. An important feature of human resource management is that it is people oriented. It looks into the welfare of the employees and evolves the best arrangement for the employee and the organisation such that the organisational goals are met. It is a line responsibility: human resource managers give advice not only to their own department regarding issues relating to human resources but also to other departments. HRM is common to all organisations. It is not only a feature of the industrial world but is hugely relevant in the fields of service, sports organisations, religious organisations, social organisations, etc. Since HRM is mainly focussed on issues relating to people, and owing to the varying nature of people, the job of the human resource manager becomes a challenging task. It is a development-oriented, integrated approach. It tries to attain not only the individual goals of the employees but also those of the organisation and the society as a whole (Randhawa, 2007). HRM is an integral part of management activity. The objective of HRM is to ensure the effectiveness and efficiency of the organisation. This it can do by helping the organisation to reach its goals, employing the skills and abilities of the workforce efficiently, providing the organisation with well-trained employees, and looking after employees' job contentment and self-actualisation. Most important is communication with the employees, so that they are aware of the various policies undertaken by management (Randhawa, 2007). There are several formal

Thursday, August 22, 2019

Valuation methods, approaches and techniques of Tesco plc Essay

Valuation methods, approaches and techniques of Tesco plc - Essay Example Tesco plc is a supermarket firm listed on the London Stock Exchange, and it is essential to evaluate the entity's financial approaches and techniques. Capital budgeting means planning for capital assets. Capital budgeting decisions help determine whether or not money should be invested in long-term projects. As far as the research and development projects of Tesco plc are taken into consideration, the fundamental project evaluation techniques, such as the payback period, ARR (Accounting or Average Rate of Return), NPV (Net Present Value), and IRR (Internal Rate of Return), are applicable for taking better decisions. Under the present value method, the present value of all cash inflows is compared against the present value of all cash outflows; the difference between the present value of cash inflows and outflows is known as the net present value. The discount rate for obtaining the present value is some desired rate of return, which may be equal to the cost of capital of the company. In addition to this, it is necessary to consider concepts like book value, market value, present value, and the price-earnings ratio for analysing the financial viability of Tesco plc. From an accounting point of view, there is a great difference between book value and market value. The weights to be used can be either book value weights or market value weights. Book value weights are easier to calculate and can be applied consistently. Market value weights are considered superior to book value weights, as component costs are opportunity costs and market values reflect economic values. However, these weights fluctuate frequently, and the fluctuations are wide in nature. 
Book value and market value differ in that book value weights remain constant, whereas market value weights keep fluctuating. While computing WACC,
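The net present value rule described above (compare the discounted inflows against the discounted outflows and accept the project if the difference is positive) can be sketched in a few lines of Python. The cash flows and the 10% discount rate below are invented for illustration; they are not Tesco figures.

```python
# Hypothetical figures for illustration only; not Tesco's actual cash flows.
def npv(rate, cashflows):
    """Net present value: discount each cash flow back to today and sum.
    cashflows[0] is the initial outlay at t=0 (usually negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

flows = [-1000.0, 400.0, 400.0, 400.0]  # outlay, then three annual inflows
print(round(npv(0.10, flows), 2))  # -5.26: negative NPV, so reject at a 10% cost of capital
print(npv(0.0, flows))             # 200.0: the undiscounted surplus, for comparison
```

The same routine also illustrates why the choice of discount rate (e.g. a WACC built from book value or market value weights) matters: a slightly lower rate would flip this example project from reject to accept.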

The Types of Fallacies Essay Example for Free

The Types of Fallacies Essay
* "Argument" from pity: when feeling sorry for someone drives us to a position on an unrelated matter. We have a job that needs doing; Helen can barely support her starving children and needs work desperately. But does Helen have the skills we need? We may not care if she does; and if we don't, nobody can fault us for hiring her out of compassion. But feeling sorry for Helen may lead us to misjudge her skills or overestimate her abilities, and that is a mistake in reasoning.
* "Argument" from envy: when we find fault with a person because of envy. "Well, he may have a lot of money but he certainly has bad manners" would be an example of this if it is envy that prompts us to criticize him.
* Apple polishing: pride can lead us to exaggerate our own accomplishments and abilities and lead to our making other irrelevant judgments. Moore recently sat on a jury in a criminal case involving alleged prostitution and pandering at a strip club; the defendant's attorney told the members of the jury it would take "an unusually discerning jury" to see that the law, despite its wording, wasn't really intended to apply to someone like his client. Ultimately the jury members did find with the defense, but let us hope it wasn't because the attorney flattered their ability to discern things.
* Guilt trip: eliciting feelings of guilt to get others to do or not do something, or to accept the view that they should or should not do it. "How could you not invite Trixie to your wedding? She would never do that to you and you know she must be very hurt." The remark is intended to make someone feel sorry for Trixie, but even more fundamentally it is supposed to induce a sense of guilt.
* Wishful thinking: when we accept or urge acceptance (or rejection) of a claim simply because it would be pleasant (or unpleasant) if it were true. Some people, for example, may believe in God simply on the basis of wishful thinking or a desire for an afterlife. A smoker may refuse to acknowledge the health hazards of smoking. We've had students who are in denial about the consequences of cutting classes.
* Peer pressure "argument": a desire for acceptance can motivate us to accept a claim not because of its merits, but because we will gain someone's approval (or will avoid having approval withdrawn).
* Group think: when one substitutes pride of membership in a group for reason and deliberation in arriving at a position on an issue; let's include this fallacy in our list of the top ten fallacies of all time, because it is exceedingly common. It involves one's sense of group identification, which people experience when they are part of a group (a team, a club, a school, a gang, a state, a nation, the Elks, Wal-Mart, the U.S.A., Mauritius, you name it).
* Nationalism (a form of group think): a powerful and fierce emotion that can lead to blind endorsement of a country's policies and practices. ("My country right or wrong" explicitly discourages critical thinking and encourages blind patriotism.) Nationalism is also invoked to reject, condemn, or silence criticism of one's country as unpatriotic or treasonable (and may or may not involve an element of peer pressure). If a letter writer expresses a criticism of America on the opinion page of your local newspaper on Monday, you can bet that by the end of the week there will be a response dismissing the criticism with the "argument" that if so-and-so doesn't like it here, he or she ought to move to Russia (or Cuba or Afghanistan or Iraq).
* Rationalizing: when we use a false pretext to satisfy our own desires or interests. Let's say Mr. Smith decides to do something really nice for his wife on her birthday and buys her a new table saw. "This saw wasn't cheap," he tells her. "But you're going to be glad we have it, because it will keep me out in the garage and out of your way when you're working here in the house."
* "Argument" from popularity: when we urge someone to accept a claim (or fall prey to someone's doing it to us) simply on the grounds that all or most or some substantial number of people (other than authorities or experts, of course) believe it.
* "Argument" from common practice: trying to justify or defend an action or practice (as distinguished from an assertion or claim) on the grounds that it is common. "I shouldn't get a speeding ticket because everyone drives over the limit" would be an example. "Everyone cheats on their taxes, so I don't see why I shouldn't" would be another.
* "Argument" from tradition: people do things because that's the way things have always been done, and they believe things because that's what people have always believed. The fact that it's a tradition among most American children to believe in Santa Claus, for instance, doesn't prove Santa Claus exists; and the fact that it's also a tradition for most American parents to deceive their kids about Santa Claus doesn't necessarily mean it is okay for them to do so.

Wednesday, August 21, 2019

Quantitative investigation of immunoglobulins

Quantitative investigation of immunoglobulins Introduction - The quantitative investigation of immunoglobulins is a standard laboratory technique within the field of clinical immunology. Immunoglobulins can be measured quantitatively through the use of nephelometry; such measurements are vital when an immunodeficiency is suspected in a patient. The test accurately and rapidly measures the amounts of IgM, IgG, and IgA proteins within the patient's blood, and from these it can be determined whether a number of conditions or disorders are present. The role of such antibodies is in fighting infections and allergies as part of the normal immune response. A disease (or disorder) can be identified through the measurement of such protein levels. IgM, for example, can appear during an initial infection and then reappear to a lesser extent upon secondary exposure (Weir, 1978). Nephelometry is usually performed by drawing blood from a vein on the back of the hand or, if that is not possible, the inside of an elbow. The needle draws the blood into an airtight vial or tube attached to it. Removal of the needle is followed by sterilisation and covering of the incision site (Stanley, 2002). Practical Schedule - Nephelometry is an automated system that measures antigen or antibody in very dilute solution by the amount of light scatter. The principle is that when light comes into contact with the solution it is not absorbed but scatters away from the main beam, and the scatter is measured at angles between 0 and 90 degrees against a predefined calibration curve. The amount of scattered light is proportional to the concentration of molecules. As well as using dilute solutions, there also needs to be a linear correlation between the complexes formed and optical density. For this reason, measurements are recorded at several dilutions and also during the formation of the complexes. This process is known as rate nephelometry. 
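The readout step just described, converting a measured scatter signal into a concentration via a curve built from standards, can be sketched as simple piecewise-linear interpolation. The signal values and IgG standards below are invented for illustration; a real nephelometer fits a smoother curve from its own calibrators.

```python
# Sketch of reading a concentration off a calibration curve, as in nephelometry.
# Standard points are invented for illustration.
def interpolate(x, xs, ys):
    """Piecewise-linear interpolation of signal x against calibration points
    (xs ascending). Raises if x falls outside the calibrated range."""
    if not xs[0] <= x <= xs[-1]:
        raise ValueError("signal outside calibrated range; re-dilute the sample")
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# scatter signal (arbitrary units) -> IgG concentration (mg/dL) from standards
signal = [10, 25, 60, 140, 300]
conc = [200, 500, 1000, 2000, 4000]
print(interpolate(60, signal, conc))   # 1000.0: exactly on a standard
print(interpolate(100, signal, conc))  # 1500.0: between two standards
```

The out-of-range check mirrors laboratory practice: a sample whose signal exceeds the highest standard is diluted and re-run rather than extrapolated.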
When considering this technique, it is vital that the relative amounts of antigen and antibody be small enough that precipitation does not occur, but large enough to allow the formation of small immune complexes. Immunoprecipitation results are achieved through the use of monoclonal antibodies (MCAs), allowing epitopes to react with the antiserum and the MCAs to form immune precipitates with their antigens. Results - Normal results: IgG: 560 to 1800 mg/dL; IgM: 45 to 250 mg/dL; IgA: 100 to 400 mg/dL. Evaluation - The automated nature of this technique means that it is both fast and accurate, with results available within 1-2 hours. Its widespread use is mainly down to this factor, but its simplicity and low sample size and volumes also make it a valued technique in the clinical laboratory setting (Diamandis et al., 1996). It is, however, as with most techniques, not without its drawbacks. The usual precautions should be taken as when taking any blood sample: although rare, excessive bleeding, fainting, and infection should all be considered risk factors when taking samples (Drexel, 14/06/08). The presence of dust particles and other debris can cause distorted readings and lead to higher values than expected; this can be addressed through centrifugation of the specimen (Diamandis et al., 1996). In addition, air bubbles can cause similar effects on results. To ensure readings are as accurate as possible, the specificity should be at the optimum level as set on the nephelometer, and controls should be carried out wherever it is used (Palmer, 1992). Although this method determines the amount of each immunoglobulin, it does not possess the ability to identify antibodies. Another method that can be used to quantitatively investigate immunoglobulins in serum, saliva, cerebrospinal fluid (CSF), amniotic fluid, and gastrointestinal juice is radial immunodiffusion (Chapel et al., 1999). 
This technique involves adding a sample to a well in a gel made up of the antibody specific for the substance being tested for. The sample then diffuses through the gel, leading to the formation of a visible precipitate ring around the sample well at the optimum concentration. The interpretation of such results, however, is subjective, and results are delayed as the process takes several days; as such, nephelometry is recommended for greater precision, automation, objectivity, and speed, and is suitable for large-throughput testing (Keogan et al., 2006). It is also hard to analyse the results quantitatively using very small samples and a calibration curve (Chapel et al., 2006). 2. Quantitative Other Serum Proteins - Radial Immunodiffusion Introduction - Radial immunodiffusion acts upon the antigen-antibody complex precipitation reaction. It is used within the fields of neurology and oncology. It involves passive diffusion of immunoreactants through an agar matrix. An electrical current is not required for this process to occur, as it relies upon physicochemical relationships. Practical Schedule - Radial immunodiffusion works via the mixing of antiserum with agar, which is poured onto a glass plate and allowed to solidify. The antiserum must be specific for the class of immunoglobulin being measured. The agar is then punctured and the resulting wells filled with the sera from the test samples. Radial diffusion of the immunoglobulins causes the formation of a precipitate at the point where the numbers of antibody and antigen molecules are identical. As with previous techniques, a calibration curve made up from a known set of solutions is used to determine the amount of immunoglobulin present within the sample. Evaluation - Accuracy and specificity are the most potent threats to the validity of this technique. The fundamental problem is the lack of sensitivity, and it is not a rapid technique, with results taking over 48 hours owing to reaction times. 
(Chapel, 2002) Whilst it is of use in the quantitative determination of serum proteins, there is an array of factors that can lead to unreliable results. The temperature of the gel and the external environment, molecular size, gel viscosity, reactant concentration, and buffer pH are a few of the factors that affect the rate of diffusion, and the list is not exhaustive (Nakamura et al., 1979). 3. Quantitative Other Serum Proteins - Collection of Serum Collect blood in a glass container and allow it to clot at room temperature for an hour. Once the clot has formed, loosen it from the walls of the container to aid retraction. Transfer to 4 degrees and leave overnight if necessary. Collect the expressed serum and centrifuge at 150 g for 5 minutes to sediment the erythrocytes, and then at 350 g for 15 minutes. Transfer the straw-coloured serum to suitable containers and heat at 56 degrees for 30 minutes to destroy the heat-labile components of complement (Hay et al., 2002). Qualitative Immunoglobulins Introduction - The diagnosis, determination of immunity, and assessment of an individual's susceptibility to many microbial infections are based upon immunological tests on serum. When blood clots, the fluid that remains is known as serum, and it is rich in immunoglobulins. Serum, however, is not easily accessible, so other sample sources can be used. The presence of specific immunoglobulins in urine, saliva, and cerebrospinal fluid means that such bodily fluids, as well as others such as semen, can be used instead. These are anatomically the most readily available and less intrusive; however, as with other bodily fluids, they contain low concentrations of IgM and IgG. Semen is abundant in these immunoglobulins and as such may be perceived as the most accurate and reliable in any such investigation (PCT, 1987). In Serum - Immunoelectrophoresis Introduction - Serum protein electrophoresis is a qualitative investigation carried out to test for the presence of monoclonal bands (paraproteins). 
(Chapel et al., 2002) During electrophoresis, discrete monoclonal bands may appear (M bands). Further investigation is needed in order to determine the immunoglobulin heavy and/or light chains through immunofixation. This is important when trying to distinguish what sort of immunoglobulins are present. Determination is achieved through immunoprecipitation in a gel with antisera specific for the heavy and light chains of the immunoglobulin. Immunoelectrophoresis works by separating sera in agarose gel by electrophoresis. Troughs parallel to the unfixed electrophoretic strips have specific antisera added to them, leading to the formation of precipitin arcs that are clearly visible owing to the process of diffusion. Immunofixation, however, tends to be more commonly used and as such will be the main focus within this portfolio. This technique is commonly used in the diagnosis of conditions such as osteoporosis. In the abnormal absence of a heavy chain, with an abnormal reaction occurring with the antisera specific for light chains, discrete (M) bands are present. This may also point to the possibility of an IgD or IgE paraprotein, although this is far less common. If an abnormal reaction occurs with only the heavy-chain antisera, it is indicative of a rare heavy-chain disorder. It is possible to quantify individual M bands with the use of a densitometer. This acts by measuring the intensity of the stain taken up by each individual band, and as such is the only method at present of use in the measurement of paraprotein concentration (Chapel et al., 1999). Practical Schedule, taken from Clinical Immunology (Chapel et al., 2002): Immunoelectrophoresis - Apply serum samples to an electrophoresis gel at the cathode end alongside a normal serum sample as a control. Apply an electric current for 45 minutes and remove the gel. Use a stain to visualise the bands. 
Immunofixation - Specific antisera to IgG, IgA, IgM, and kappa and lambda light chains are then applied to the electrophoresed samples by soaking strips of cellulose acetate in the individual antisera and laying them on the electrophoresis gel. This is then incubated for 2 hours, and all the unfixed proteins are washed off, leaving the precipitate. Individual monoclonal bands can be quantitatively measured by a densitometer. Results - The dark areas indicate monoclonal bands. The picture above shows a positive result for the lambda chain. The presence of monoclonal bands can indicate multiple myeloma or osteoporosis. In this example, the M band is identified as IgG of kappa type. The concentration of the M band is determined using a densitometric trace, as demonstrated in the second image. Evaluation - The presence of air bubbles will distort the formation of protein bands, so the gel must be degassed. The method detailed above is much quicker and far more sensitive than the use of immunoelectrophoresis alone. Its cheapness and low hazard level make it a desirable technique for the detection of immunoglobulins within serum (Zola et al., 1999). Qualitative Immunoglobulins in Urine - Electrophoresis and Immunofixation Normal kidney physiology dictates that protein is usually excreted in the urine in minimal amounts. Higher levels can raise the suspicion of multiple myeloma, which can cause irreparable damage to the kidneys since nephritic cells are not replaceable, with chronic lymphocytic leukaemia and hypogammaglobulinaemia also being suspects. Kidney disorders such as IgA nephropathy may also be a cause of such findings. All humans produce free polyclonal light chains in accompaniment to normal immunoglobulin synthesis; these are secreted into the urine and are detectable in low amounts in all samples (Thompson, 1981). If the normal range is exceeded, however, it is indicative of renal damage. 
This method is often used to detect these small free monoclonal light chains, also called Bence-Jones proteins, because normal testing parameters fail to pick them up (Chapel, 2005). Bence-Jones proteins are distinguishable by their unusual thermal properties; for example, they precipitate out of urine at 56 degrees and redissolve upon further heating (Thompson, 1978). Practical Schedule - Concentrate the urine by ultrafiltration, absorption of water, or freeze-drying; there are several commercially available kits for concentrating urine. Then use electrophoresis to determine the presence of monoclonal bands, and immunofixation to establish what the monoclonal band is made of (Chapel et al., 2006). Results - Serum protein samples from patients with light-chain multiple myeloma, and one normal result on the far left. The M protein is seen as a dark, dense band localised on the strip; this picture shows the different bands that can be detected. 
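The densitometric quantification of an M band mentioned above amounts to a simple proportion: the band's share of the total stain absorbance on the trace, multiplied by the total serum protein. The figures below are invented for illustration.

```python
# Sketch of densitometric M-band quantification; the numbers are invented.
def m_band_concentration(band_area, total_area, total_protein_g_per_l):
    """Paraprotein concentration = (band area / total trace area) * total protein."""
    return band_area / total_area * total_protein_g_per_l

# e.g. the M band accounts for 30% of the stain on the trace,
# and the measured total serum protein is 80 g/L
print(m_band_concentration(3.0, 10.0, 80.0))  # 24.0 g/L paraprotein
```

The same calculation underlies why densitometry needs a separate total-protein measurement: the trace alone only gives relative band intensities, not absolute concentrations.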
Albumin: Decreased with malnutrition and malabsorption, pregnancy, kidney disease (especially nephrotic syndrome), liver disease, inflammatory conditions, and protein-losing syndromes. Increased with dehydration.
Alpha-1 globulin: Decreased in congenital emphysema (alpha-1-antitrypsin deficiency, a rare genetic disease) or severe liver disease. Increased in acute or chronic inflammatory diseases.
Alpha-2 globulin: Decreased with hyperthyroidism or severe liver disease, haemolysis (red blood cell breakage). Increased with kidney disease (nephrotic syndrome), acute or chronic inflammatory disease.
Beta globulin: Decreased with malnutrition, cirrhosis. Increased with hypercholesterolaemia, iron-deficiency anaemia, some cases of multiple myeloma or MGUS.
Gamma globulin: Decreased in a variety of genetic immune disorders and in secondary immune deficiency. Increased (polyclonal): chronic inflammatory disease, rheumatoid arthritis, systemic lupus erythematosus, cirrhosis, chronic liver disease, acute and chronic infection, recent immunization. Increased (monoclonal): Waldenström's macroglobulinaemia, multiple myeloma, monoclonal gammopathies of undetermined significance (MGUS).
Table from Lab Tests Online UK. Evaluation - This method allows the determination of the different proteins in the urine and can be vital in allowing the doctor to work out a diagnosis of the condition. It is relatively simple and reliable; however, the results can only be read by a skilled worker, and owing to its various steps it is not as rapid as desired. Results show that different diagnoses are reached depending on which proteins are increased in the urine, as indicated in the table above. Qualitative Immunoglobulins in Cerebrospinal Fluid - Immunoperoxidase and Isoelectric Focusing This test allows for the comparison of IgG and albumin concentrations. 
This relationship is important to differentiated as IgG is synthesised by lymphocytes within the brain where as albumin is not and is known as the CSF IgG Index that is indicative of this fact as demonstrates how much IgG within the CSF has been synthesised. (Chapel et al2006). Unlike the before mentioned serum where single discrete (M) bands where formed the locally synthesised IgG is often oligoclonal and subsequently cannot be detected by means of electrophoresis of CSF as isnt concentrated. (Roitt et al.. 2002) The only available method for the detection of oligoclonal bands are isoelectric focusing and immunofixation with enzyme labelled antiserum. Investigation and diagnosis of demyelinating disorders such as Multiple Sclerosis is carried out using such tests. (Richard et al 2002) Practical Schedule- Isoelectric focusing and immunofixation with enzyme labelled antiserums. This involves separating the proteins within a pH gradient and transferring them to nitrocellulose membranes that have previously been immunofixed with IgG antiserum to show the specific bands. This can be compared with controls to determine the new bands. (Richard et al., 2002) Results A positive result is where the oligoclonal IgG bands are not found in serums, but, in Cerebrospinal Fluid. These are shown as dense dark bands on the results below. 5-10% of CSF protein tends to be IgG. If a patient has disseminated sclerosis or sub-acute sclerosing panencephalitis then the proportion of IgG in CSF is over 12%. Evaluation This is a relatively modernised method and is approved for use within a clinical setting. The older isoelectric focusing is no longer recommended as it possesses a higher degree of specific (95%) and sensitivity. In addition it is favourable as only requires low concentrations of serum samples and results are available within 2 hours and mostly work on an automated level. (Richard et al.. 
2002).

Qualitative Immunoglobulins in Saliva - complement components
Introduction: complement components are large molecular-weight proteins. Activation of these usually results in proteolytic cleavage of the molecule into fragments (Thompson, 1978). Western blotting is used in combination with gel electrophoresis, and ELISAs and RIAs are used when a whole saliva sample is collected or when there are saliva fractions (Fabian et al., 2007).

Practical Schedule: gel filtration is carried out on Sephadex G-200. Serum samples of 1.5 ml are applied to a 2.5 cm diameter, 40 cm long column containing the Sephadex. This is equilibrated with a buffer containing 0.14 M NaCl, 0.006 M NaH2PO4 and 0.035 M Na2HPO4 at a pH value of 7.3. Fractions of 2.5 ml each are collected at a flow rate of 30 ml per hour, and the protein content of the effluent is measured as UV transmission at 280 nm in an absorptiometer.

Results: the results are determined by using these filtered samples and single radial diffusion; a calibration curve, created using standard solutions, is needed to determine amounts (Rose et al., 1997).

Evaluation: complement components occur in large amounts in serum and can be measured accurately by the precipitin reaction in gel. Detecting them as antigens, however, means it cannot be determined whether they are active or not. Collecting specimens for complement assays can be difficult, as care must be taken not to induce the complement pathway; false results caused by this should be avoided when trying to determine the activation that occurred in vivo. Single radial diffusion can be used to determine amounts quantitatively. The test is rapid, reliable, and easy to carry out and to read (Rose et al., 1997).

Complement breakdown products: Complement C3 - crossed immunoelectrophoresis
Introduction: the complement system comprises proteins (which may be membrane-bound or present in plasma) that play an important role in host defences (Stanley, 2002).
The system is involved in destroying certain bacteria and viruses, and in initiating the inflammatory response. Complement is also important for the opsonisation of foreign materials, facilitation of phagocytosis by leukocytes, and direct cytotoxic reactions (Gaspari and Tyring, 2008). Crossed immunoelectrophoresis can determine the amount of C3 and has the advantage of differentiating between its inactive and active forms. Deficiencies in C3 can lead to systemic infections including sepsis, meningitis, and pneumococcal and influenza infections.

Method. First dimension: prepare a 2% agarose solution in barbitone buffer containing EDTA (ethylene diamine tetra-acetic acid). Pour 3 ml of the agarose solution onto a microscope slide and let it set. Cut a 1 mm well in the slide, removing the agarose, and fill it with the serum sample for C3 quantification. Apply a potential difference of approximately 150 V for 2 hours. Cut a 5 mm wide longitudinal strip containing the sample. Second dimension: prepare 12 ml of an anti-C3 solution in 2% agarose at 56 degrees. Place the agarose strip at one end of a square glass plate and cover the whole slide with the agarose containing the anti-C3. Place the plate in the electrophoresis tank, making sure it is the right way round, and electrophorese overnight. Wash and stain the precipitin arcs. This method works by using the electric field to separate the complement components.

Results. Evaluation: as with many of the techniques mentioned earlier, it requires a skilled technician and can be time-consuming owing to the numerous steps and incubation periods set out in the methodology (Hay et al., 2002).

Complement nephritic factor
Introduction: nephritic factor is an autoantibody to activated C3; it causes breakdown of C3 in the alternative pathway, cleaving it into two fragments (C3d and C3c) that are inactive forms of the normal C3b.
It binds and stabilises the alternative-pathway C3 convertase (which is present in all sera) in both the presence and the absence of serum proteins. The autoantibody (the C3 nephritic factor) acts in the complement system not by blocking the enzyme's active site but by blocking the site where inhibitors would normally limit the action of, and destroy, the enzyme. Tests to determine the C3 nephritic factor are performed in patients with an unexplained C3 concentration below normal and normal C4 levels (SAS Centre, 2009). This is because the presence of the C3 nephritic factor means that C3 is continuously broken down and depleted. Low levels of C3 can be associated with kidney disorders or recurrent infections (Chapel et al., 2006).

Practical Schedule: the practical schedule is similar to that described earlier. It uses samples with the suspected nephritic factor together with normal serum samples. They are incubated together and, if the nephritic factor is present, it breaks down the C3 in the normal sample (Chapel et al., 2006).

Results: as expected from the similarity in methodology, the results resemble the detection of C3 by crossed immunoelectrophoresis shown above. If only inactive forms are present, owing to inactivation by nephritic factor, the result is deemed positive. A negative result is one where there is no nephritic factor, meaning that none of the C3 has been inactivated.

Evaluation: this method is useful in the detection of nephritic factor only, and it is not a very direct test, as it is carried out by determining the amount of C3.

Complement functional assay: CH50
Introduction: complement functional assays are the basis for the diagnosis of complement deficiency disorders. These are divided into subcategories depending upon their relation to another disease.
Primary complement deficiencies are genetically based, and secondary deficiencies are those that are acquired. Functional assays play a pivotal role in the assessment of the classical, alternative and terminal pathways of complement activation. The most common haemolytic assay used in the laboratory setting is the CH50 assay, as it is the simplest and easiest to carry out. The functional integrity of the classical complement pathway (C1, C2, C3, C4) is measured using the CH50, or total haemolytic complement, assay. This is achieved by measuring the quantity of serum required to cause haemolysis of half of a standardised quantity of sensitised red blood cells (Chapel et al., 2006). Classical-pathway components become activated and lyse sheep erythrocytes that are coated in rabbit anti-sheep erythrocyte antibodies (Rose et al., 1997).

Practical Schedule: add the sera to be tested, along with buffer, to microtitre wells in different concentrations. Then add the sheep erythrocytes. Cover and incubate at 37 degrees for 1 hour. Then centrifuge and carry out an ELISA to detect the results (Rose et al., 2002).

Results. Evaluation: the method is generally sensitive and reliable, provided the specimen is tested quickly and all reagents are kept on ice (Chapel et al., 2006). The problem arises in availability: functional assays for complement are not widely available and are limited to laboratories that have the equipment (Gaspari and Tyring, 2008). As with many immunological techniques, the fundamental threat to validity is improper sample collection; this can occur easily in the on-site environment, where a sample can be left to stand for considerable periods at room temperature (Rose, 1997).

5. Microbial Antigens: ELISA
Introduction: by coupling the antigen to an insoluble adsorbent it is possible to detect human antibodies to specific antigens using this technique.
Elevated antibody titres remain a reliable indication of the presence, and a measure, of active infection within the diagnostic process. ELISAs provide highly sensitive and precise methods for the estimation of biological parameters, with the added advantage that they can handle large numbers of samples that may be analysed rapidly; they are useful in the detection of a range of viral and bacterial infections, including TB and pneumonia (Chapel et al., 2006). Many types of immunoassay can be used to detect and quantify both antigens and antibodies, but there are differences in the avidity requirements for the antibodies, the signal strengths of the labels, and the amount of background for each type of assay. Antibody-capture assays are the most appropriate for measuring the titre of the antisera that have been generated. ELISAs, by definition, exploit the use of an enzyme attached to one of the reagents utilised in the test. Subsequent addition of the relevant enzyme substrates/chromogens causes a colour change; the results can be read by eye and quantified using specially designed spectrophotometers. The fact that proteins (including antibodies) and carbohydrates can be passively attached to plastics has been exploited in most applications of ELISA. Since one of the components is attached to a solid phase by passive absorption, subsequent reagents can be added and, after a period of incubation, unreacted material can simply be washed away. Such assays are termed heterogeneous ELISAs. The plastic surface is known as the solid phase, and plastic in the form of 96-well microtitre plates has proved highly practical for the following reasons: a large number (96) of sample wells are available in a highly practical form; multichannel pipettes (4, 8 or 12 channels) designed for use with such plates are available, making reagent handling rapid and simple; and test volumes are small (e.g.
50 µL, 100 µL); comparative readings of coloured products can be made by eye or by specially designed multichannel spectrophotometers (96 wells are read in 2-5 s). These facts afford the potential to handle numerous plates rapidly, and hence numerous samples may be examined, e.g., 20 plates/person = 1,920 sample points/person. Attachment of reagents also allows great versatility for ELISA, since the various components of assays may be used in different combinations and in different phases to investigate their potential. It is difficult to generalise about the potential performance of the various ELISA systems: there is a wide range of configurations available, and probably no two scientific groups attempting to perform the same task by ELISA will use identical configurations.

Practical Schedule: dissolve the antigen in carbonate-bicarbonate buffer. Add 200 µl to each well of a micro-ELISA plate, cover, and incubate overnight at 4 degrees. Wash so that unbound antigen is removed and fill with casein to block the remaining binding sites. Incubate at room temperature for 1 hour. Add 200 µl of test serum and incubate for 2 hours at room temperature in a humid chamber. Wash the plate three times. Prepare the peroxidase-antibody conjugate: mix 100 µl of casein with 1 ml of serum and 100 µl of Tween 20 with 50 µl of peroxidase-antibody, and stir gently. Add 200 µl to each well and incubate at room temperature for an hour. Wash three times. Prepare the substrate solution and add 200 µl of substrate to each well. Leave in the dark and allow the colour to develop. Stop the reaction by adding 50 µl of sodium fluoride to each well. An ELISA reader can then be used to quantify the colour reaction. (General method from Hay et al., 2002.)

Results: a positive result is characterised by a colour-producing reaction showing the presence of antibodies to the specific type of bacteria, highlighted by a dark band.

Evaluation: on the whole, this method remains largely specific and rapid.
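The colour readings quantified by an ELISA reader, as described above, are commonly reduced to an endpoint titre. A minimal sketch (the dilution series, optical densities and cutoff are hypothetical, for illustration only) of how that determination can be scripted:

```python
# Endpoint titre from a two-fold serial dilution of a test serum.
# The titre is the reciprocal of the last dilution whose optical
# density (OD) still exceeds the assay cutoff. All values hypothetical.

def endpoint_titre(dilutions, ods, cutoff):
    """Return the highest dilution factor still positive, or None."""
    titre = None
    for dilution, od in zip(dilutions, ods):
        if od >= cutoff:
            titre = dilution  # still positive at this dilution
        else:
            break  # dilutions are ordered, so stop at the first negative
    return titre

# Two-fold series starting at 1:100, with plate-reader ODs
dilutions = [100, 200, 400, 800, 1600, 3200]
ods = [1.92, 1.45, 0.98, 0.51, 0.22, 0.08]
cutoff = 0.20  # e.g. mean blank OD plus 3 standard deviations

print(endpoint_titre(dilutions, ods, cutoff))  # -> 1600
```

The same loop handles any dilution factor, provided the series is supplied in increasing order.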
The major source of problems is the scientists involved; in particular, a lack of close-contact training in the fundamentals of ELISA, without which the scientist lacks the experience to identify and then solve problems in the use of reagents. The biological implications of the results cannot be assessed without general knowledge of several fields of science, e.g., epidemiology, immunochemistry, biochemistry and immunology. This, however, should not be considered too problematic, as the ELISA should be a tool for the investigation of specific problems rather than an end in itself. Whilst it may lack sensitivity in comparison to immunofluorescence and flow cytometry in the detection of TB, it remains the cheapest and easiest to carry out (Rose et al., 1997).

6. Autoantibodies
It is becoming increasingly evident that the presence of tissue autoantibodies is not in itself pathognomonic of disease. Improvements in technique in the last few years have led to increased sensitivity and the detection of weak antibodies in sera which would hitherto have been reported negative. As large series of patients are tested in an increasing number of laboratories, previously held views on the specific clinical associations of particular antibodies are being revised, and reference to the early literature may therefore be misleading. Since antigens and antibodies are defined by their mutual interactions, they can be used to quantify each other. At a practical level in a diagnostic laboratory, functional tests are labour-intensive and therefore expensive, and a compromise is usually sought by using immunochemical assays which measure a composite of medium-to-high-affinity antibodies and their abundance. The antibody has become the scientist's flexible friend! For example, antibodies raised against hormones, serum proteins, cell constituents, cytokines, or even immunoglobulins themselves allow these parameters to be measured in immunoassays.
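Quantifying a parameter in an immunoassay, as described above, usually means reading an unknown off a calibration curve built from standard solutions. A minimal sketch (the standard concentrations and absorbances are hypothetical) using linear interpolation between the two standards that bracket the reading:

```python
# Interpolate an unknown's concentration from a standard curve.
# Standards are (concentration, absorbance) pairs; all values hypothetical.

def interpolate_concentration(standards, absorbance):
    """Linear interpolation between the two standards bracketing the reading."""
    standards = sorted(standards, key=lambda s: s[1])  # order by absorbance
    for (c_lo, a_lo), (c_hi, a_hi) in zip(standards, standards[1:]):
        if a_lo <= absorbance <= a_hi:
            frac = (absorbance - a_lo) / (a_hi - a_lo)
            return c_lo + frac * (c_hi - c_lo)
    raise ValueError("reading outside the range of the standards")

standards = [(0, 0.05), (25, 0.30), (50, 0.55), (100, 1.00), (200, 1.70)]
print(interpolate_concentration(standards, 0.775))  # midway between 50 and 100
```

In practice a fitted curve (rather than piecewise-linear interpolation) is often used, but the bracketing principle is the same.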
Immunoassays form the backbone of tests used in the study

Tuesday, August 20, 2019

Quantity Surveying Estimating Methods Impact

Quantity Surveyors are construction professionals whose profession, relatively new to the country, is governed by the Institution of Surveyors, Malaysia and the Board of Quantity Surveyors Malaysia. Quantity Surveyors give advice on aspects of financial and contractual administration (ISM, 2004). ISM (2004) defines the Quantity Surveyor as an expert in the cost and management of construction projects, who also prices Bills of Quantities and negotiates and agrees schedules of rates. According to Andrew Doyle and Will Hughes (1997), the Quantity Surveying profession is constantly scrutinised, with regular demands for more accurate estimating. Mohammad Barzandeh (2009) defines estimating as the process of calculating, before work starts, the future costs of a construction project. The Quantity Surveyor is responsible for these estimates, which serve to ensure that the construction project has a successful financial outcome. Phuwadol Samphaongoen (2009) describes construction cost estimating as a cumbersome process: an accurate estimate takes the Estimator a long time to complete, and a Contractor's Estimator has to prepare cost estimates quite often for new projects. According to Skitmore et al (1990), the aim of construction price estimating is to provide an estimate of the market price for construction contracts. On the other hand, Holm et al (2005) define cost estimating as the process of analysing a specific scope of work and predicting the cost of performing that work; cost estimating also involves collecting, analysing and summarising all available data related to a construction project. Hira N. Ahuja and Walter J. Campbell (1998) offer a simple definition of an estimate: a prediction of probable cost.
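The definitions above reduce to simple arithmetic: an estimate prices a measured scope of work item by item and adds mark-ups. A minimal sketch (the items, rates and percentages are hypothetical, for illustration only) of totalling a small priced Bill of Quantities:

```python
# Price a small Bill of Quantities: each item is (description, quantity, unit rate).
# Items, rates, overhead and profit percentages are hypothetical.

items = [
    ("Excavation (m3)",        120, 15.00),
    ("Concrete grade 30 (m3)",  45, 280.00),
    ("Rebar (tonne)",            6, 3200.00),
]

direct_cost = sum(qty * rate for _, qty, rate in items)
overheads = 0.10 * direct_cost                 # assumed 10% overheads
profit = 0.05 * (direct_cost + overheads)      # assumed 5% margin
tender_price = direct_cost + overheads + profit

print(round(direct_cost, 2), round(tender_price, 2))
```

Every estimating method discussed in this chapter is, at bottom, a way of arriving at better quantities and rates for such a computation.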
According to Mohammad Barzandeh (2009), estimating is one of the most important functions for a successful construction project. Estimates also influence budgeting decisions and assist the Client's decision in selecting the Contractor.

1.2 Problem Statement
Cost overrun is a very common phenomenon, with most construction projects in Malaysia facing this problem. Cost overrun occurs when the final cost or expenditure of a construction project exceeds the original estimated cost, and it occurs in both developing and developed countries (A.S. Ali and S.N. Kamaruzzaman, 2010). Kai Zhu (2005) emphasises that cost estimation and planning is a very important and fundamental aspect of the construction process, as it facilitates effective and efficient control of construction projects. Despite its importance, in practice its requirements are often not fulfilled because of time constraints, which in the long run affects a project's quality, duration and budget. According to Kai Zhu (2005), one of the factors that cause cost overrun in the Malaysian construction industry is the inaccuracy of cost estimates prepared by Quantity Surveyors; a possible consequence of underestimation is abandonment of the construction project. On the other hand, an overestimated cost could result in loss of opportunities for the Client and loss of the contract award for the Contractor, while both the Client and the Contractor could incur significant losses due to an underestimated cost. In addition, Stephen D. Schuette et al (1994) emphasise that inaccurate construction project estimates can have a detrimental effect on all parties involved. Many additional factors might affect the future events of a construction project, such as labour productivity, material availability, financial markets, weather, constructability issues, equipment availability, contract types, ethics, quality issues, control systems, management ability and others.
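Cost overrun as defined above, final expenditure exceeding the original estimate, is usually reported as a percentage of the estimate. A one-function sketch (the figures are hypothetical) of that calculation:

```python
# Cost overrun as a percentage of the original estimate (hypothetical figures).
def overrun_pct(estimated, final):
    """Positive result = overrun; negative = project came in under the estimate."""
    return (final - estimated) / estimated * 100

print(overrun_pct(10_000_000, 11_500_000))  # final cost 15% above estimate
```

A negative value from the same function indicates an underrun, i.e. an overestimate at tender stage.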
1.3 Objectives
The objectives are:
1. To identify the factors affecting the accuracy of estimation by the Quantity Surveyor during the pre-tender and tender stages.
2. To identify the impact of inaccurate estimation on the Client and/or the Contractor.
3. To make recommendations for minimising the inaccuracy of estimation during the pre-tender and tender stages.

1.4 Rationale of study
Keith Potts (2011) states that the estimating process is very important, as it enables construction companies to determine their direct costs and provides a bottom-line cost below which it would not be economical for them to carry out the construction work. Leng (2005) also states that cost estimation is one of the most important activities of the entire project duration: an over-estimate could lead to a tender not being accepted by the Client and the loss of potential work, while an under-estimate could lead to the Contractor losing money. According to Hira N. Ahuja and Walter J. Campbell (1998), cost estimates play a major role in the decision-making process that leads from concept to completion of a construction project, and cost estimating has become very important under economic conditions of high inflation and fiscal constraint. As mentioned by Keith Potts (2011), the basic challenge faced by the Contractor's Estimator is to estimate the costs and schedule of the specific construction activities and then build the project within the estimated cost and schedule. For a Contractor to build a project profitably, cost estimating and cost control skills are essential. Kai Zhu (2005) recommends that the factors affecting the accuracy of estimation should be identified in order to increase that accuracy. Accurate cost estimation also minimises the risk of cost overrun, provides confidence in project outcomes to management, and contributes to the strategic management of the organization.
According to Zaitoun Shadeed Al-Khaldi (1990), there are many factors that affect the accuracy of construction cost estimating, and they should be taken into account in the early stage of an estimate. Some of these factors can increase costs and the possibility of contractual disputes between the various parties involved.

1.5 Research Methodologies
As stated by Richard Fellows and Anita Liu (2008), there are two major approaches to data collection: primary data and secondary data. Primary sources allow the researcher to get as close as possible to what actually happened during a historical event or time period. A secondary source is a work which interprets or analyses an historical event or phenomenon; it is generally at least one step removed from the event and is normally based on primary sources. According to Richard Fellows and Anita Liu (2008), primary research can be categorised as qualitative, quantitative or mixed-method. Qualitative approaches seek to obtain insight and to understand people's perceptions. Quantitative approaches tend to relate to positivism and seek to obtain factual data, to study relationships between facts, and to examine how such facts and relationships accord with theories and with any research executed previously. Interviews will be conducted in order to derive primary data. In addition, a quantitative approach, namely questionnaires, will be employed and conducted through postal delivery and e-mail. According to Denscombe (2007), secondary data provide the researcher with theoretical background and knowledge. Secondary data will be collected by the literature review method, which includes reading journals, articles, electronic publications, theses or dissertations done by other students, news and books. For this dissertation, the majority of the secondary data is collected through books, articles and journals.

Chapter 2: Literature Review
2.1 Factors affecting the accuracy of estimating
According to Hira N. Ahuja and Walter J.
Campbell (1998), accuracy in estimating relies on freedom from avoidable mistakes. Estimating errors may be attributed to technical errors in calculations or simply to careless blunders. Some common blunders are misplacing a decimal point, failing to include the total of an estimate sheet in the final summary, errors in transferring figures from one sheet to another, simple multiplication or addition mistakes, and misreading a number because of unclear handwriting. Any one of these types of error can have a significant effect on the accuracy of an estimate.

2.1.1 Construction items
2.1.1.1 Complexity of project
Michael Kitchens (1996) emphasises that the construction industry has become increasingly complex through the years as a result of improvements and advances in technology, natural evolution and litigation. H. van Meerveld et al (2009) state that the level of complexity of a construction project is a function of three features: organisational complexity, resource complexity and/or technical complexity, and cost estimation may be influenced by any of them. As mentioned by Michael Kitchens (1996), organisational complexity is the number of people, departments and organisations involved. Organisational complexity might lead to a loss of information, because communication becomes more difficult when more people are involved, and the information that is lost can sometimes be necessary for producing an estimate. Organisational complexity can also mean that Estimators work simultaneously on the same project; in this case, Estimators have to put more effort into coordinating this simultaneous work. According to H. van Meerveld et al. (2009), resource complexity is the volume of resources involved, commonly assessed through the budget of the construction project.
Resource complexity means that the overall amount of estimating work increases, which also increases the chance of making mistakes or errors. H. van Meerveld et al. (2009) note that technical complexity is the level of innovation involved in the product or the construction process, or the novelty of interfaces between different parts of that process or product. Technical complexity means that Estimators will have to make manual adjustments to acquire a more accurate estimate for the particular project. More complex projects are also subject to a higher chance of design changes, and the Estimator has to re-estimate the complete project, or parts of it, depending on the sort of design changes. H. van Meerveld et al. (2009) state that, in general, two issues influence estimating on more complex projects: a higher demand for coordination and structure in preparing the estimate, and the fact that, as complexity increases, more effort is needed to acquire an estimate and the probability of making mistakes rises. However, according to A. Ashworth et al., the complexity of the modern construction industry and the variety of processes used have limited the availability of reliable feedback information; in practice, the Estimator will have to use his own standard outputs and couple these with an expectation of future performance.

2.1.1.2 Labour productivity
As stated by Donald F. McDonald et al (2004), on construction projects there are numerous circumstances and events that may cause productivity to decline which the Estimator might not have anticipated when estimating the construction cost. Estimating labour and equipment costs requires more knowledge of construction techniques and experienced judgement than estimating material costs.
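The dependence of an estimate on assumed labour productivity can be made concrete: the labour cost of a work item is the measured quantity divided by the assumed productivity rate, multiplied by the crew's hourly cost. A minimal sketch (the quantities, productivity rates and crew rates are hypothetical):

```python
# Labour cost driven by an assumed productivity rate (all values hypothetical).
def labour_cost(quantity, productivity_per_hour, crew_rate_per_hour):
    """quantity / productivity gives crew-hours; multiply by the hourly crew cost."""
    crew_hours = quantity / productivity_per_hour
    return crew_hours * crew_rate_per_hour

# 500 m2 of formwork; the crew fixes 10 m2/hour at an all-in rate of 85/hour
base = labour_cost(500, 10.0, 85.0)      # cost as estimated
disrupted = labour_cost(500, 8.0, 85.0)  # productivity falls 20% on site

print(base, disrupted)  # the estimate understates cost when productivity falls
```

This is why the unanticipated productivity losses discussed below (poor scheduling, missing tools, poor site layout) feed directly into estimate inaccuracy.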
This is because the greatest uncertainty lies in predicting the productivity of the labour and equipment used on the project. According to Aiyetan Ayodeji Olatunji (2010), construction productivity is influenced by many factors, including materials, equipment, tools, construction methods, and the adequacy and accurate application of management skills. Donald F. McDonald et al (2004) mention that lost productivity may result from poor project management, for example a failure to schedule and coordinate the work properly, an event the Estimator might not have expected. Work that is not scheduled properly can lead to shortages of critical construction equipment or labour, and an incorrect mix of labour crews may decrease productivity because the labour may not be able to work as efficiently as it could. Improperly planned and implemented project initiation procedures might also lead to lost labour productivity. Donald F. McDonald et al (2004) also state that if materials, tools or construction equipment are not available to a particular crew at the right location and time, the productivity of that crew will probably suffer, as they may be unable to proceed in a consistent manner. Productivity might also suffer if the wrong tools or improperly sized equipment are provided. In addition, poor site layout design can affect productivity. Lee Holm et al. (2005) emphasise that estimates should vary depending on site conditions, crew size, labour experience and the equipment selected for the particular project, yet lost labour productivity is normally not tracked and cannot be discerned separately and contemporaneously.

2.1.1.3 Insufficient time
David G.
Carmichael (2002) emphasises that construction projects involve design times of months or years, yet Contractors are asked to digest the tender documents and submit a tender within a relatively short period. The planning, estimating, development of a work method, studies and so on are prepared within a short period, or insufficient time is allowed; Contractors require an appropriate tender period to develop a thorough tender. David G. Carmichael (2002) states that when insufficient time is given to the Contractor for estimating and pricing the tender, in a rushed process, the Contractor might miss important considerations. Binnington Copeland Associates (2012) also state that failure to allow the Estimator adequate time to consider and price risks carefully might result in excessively high tender prices, where a substantial contingency is allowed by the Estimator to cover unexpected situations because there was not enough time to deal with them. According to David G. Carmichael (2002), this presents a challenge to the Estimator, who has to prepare and complete several estimates and tenders in a relatively short period of time. Tang Wai Kuen, Raymond (2005) emphasises that insufficient time for cost estimating is a principal factor causing inaccurate cost estimates, as construction programmes are very tight and designs are frequently changed. The performance of cost management is adversely affected when the Estimator is given insufficient time to estimate.

2.1.1.4 Inadequate information
According to Aiyetan Ayodeji Olatunji (2010), the majority of Contractors' Estimators face problems when tendering for a construction project where the information provided is insufficient. Under this circumstance, the Estimator must make his or her own estimates and assumptions; if inaccurate assumptions are made, the Contractor may overestimate or underestimate.
An overestimate might cause the Contractor to fail to win the project, while an underestimate might leave the Contractor unable to profit from the project or, worse still, lead to abandonment of the work due to insufficient finance to run it.

2.1.1.5 Lack of availability of equipment
There are two major circumstances faced by the Estimator when estimating or pricing the tender. Firstly, the Estimator might need to estimate fluctuations in equipment prices over a relatively long period, as construction normally lasts for a few years. Secondly, during the tender stage the Estimator might need to decide whether to own equipment or to hire it from a specialist company, if the project requires equipment the Contractor does not own (Zaitoun Shadeed Al-Khaldi, 1990). According to Zaitoun Shadeed Al-Khaldi (1990), when the Estimator has to evaluate and select a particular piece of equipment, it is essential to determine its hourly cost very accurately. Some factors to be considered at this stage are the number of hours used per day, month and year, the severity of job conditions, the way the equipment has been maintained, and the demand for equipment owned by the Contractor when it is sold. In addition, the price of equipment might increase due to inflation, or many mega-projects running concurrently might make equipment unavailable.

2.1.1.6 Incomplete drawing and detail design
Lee Holm et al. (2005) emphasise that the accuracy of an estimate will also depend upon, among other things, the completeness of the contract documents provided. Incomplete drawings and detailed design are further factors causing inaccurate estimates on construction projects: during tendering, the Contractor's Estimator may have to carry out estimating work from drawings that are unclear and show very little detail.
The Estimator then has to make his or her own assumptions when estimating and pricing the tender, which increases the chance of an inaccurate estimate for the particular construction project.

2.1.1.7 Computerised estimating software
The actual use of the computer for estimating varies within the construction industry: some companies use the computer for all construction projects with a high degree of sophistication, while others do not use the computer at all (Stephen D. Schuette and Roger W. Liska, 1994). Phuwadol Samphaongoen (2009) describes detailed cost estimating as a cumbersome process that involves a lot of data and calculations; improvements in technology could assist the estimating process and reduce the Estimator's workload. Computers are considered effective tools for assisting Estimators during the pre-tender and tender stages, providing many benefits including reduced estimation errors and reduced time required. Stephen D. Schuette and Roger W. Liska (1994) state that the early uses of computers by construction companies were limited to accounting functions only; improvements in microcomputing have since increased knowledge of computer capabilities, and construction managers have begun to use computers in everyday construction operations to make quick and accurate decisions. Phuwadol Samphaongoen (2009) lists estimating technologies including spreadsheets, Buildsoft, Microsoft Excel, cost estimating software, digitizing tablets, on-screen digitizing systems, the yet-to-mature 3-D CAD parametric estimating software, and others. During the detailed cost estimating process, various software packages are available to assist the Estimator. The capabilities of software packages vary greatly; some include labour, equipment and material cost databases which, once set up, facilitate the estimating process.
According to Phuwadol Samphaongoen (2009), a spreadsheet is a computer application that simulates a paper worksheet. It provides the user with cells arranged into rows and columns. Each cell can contain text, a numerical value or a formula; a formula defined in a cell obtains a calculated value from the related cells. With formulas, complicated mathematical calculations can be automated by changing a single cell. Estimators use worksheets to accelerate the estimating process: the Estimator can set up a template with saved formulas, and quantity take-off calculations can be performed within the spreadsheet. Although a spreadsheet requires a lot of input from the Estimator, it removes the cumbersome and error-prone manual calculations of quantity take-off and pricing. As stated by Mofti Bin Marjuki (2006), Global Estimating is an estimating program tailored for commercial use in the construction industry. Bills of Quantities, detailed Estimates and Cost Plans can be produced with this program. Although designed primarily for commercial building Contractors and professional Quantity Surveyors, it includes features that allow it to be used in other industries where estimating is required. The grouping columns in this program are very powerful, as the entered information can be re-sorted and analysed; for example, the estimate can be summarised to produce totals by area, block, stage, cost centre, accounting group, or any user-defined set of codes. Mofti Bin Marjuki (2006) also mentions that Microsoft Excel can be used to store industry-standard cost data in the CSI format for all cost categories covering general construction. Unit prices include material and labour, including labour hours. The software makes it easy to modify and add cost data to suit local conditions and business practice.
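The spreadsheet-style quantity take-off described above amounts to quantity-times-rate lines with an automatically recalculated total. A minimal sketch, with entirely hypothetical work items and unit rates:

```python
# Illustrative sketch only: a spreadsheet-style quantity take-off reduced to
# quantity x unit-rate lines. All items and rates are hypothetical examples.

take_off = [
    # (work item,             quantity, unit, unit rate)
    ("Excavation",             120.0,  "m3",  15.50),
    ("Concrete grade 30",       45.0,  "m3", 280.00),
    ("Brickwork 115mm thick",  310.0,  "m2",  42.00),
]

def line_total(quantity, rate):
    # Equivalent of a spreadsheet cell formula such as =B2*D2.
    return quantity * rate

for item, qty, unit, rate in take_off:
    print(f"{item:<24}{qty:>8.1f} {unit:<4}@ {rate:>8.2f} = "
          f"{line_total(qty, rate):>10.2f}")

# Changing a single "cell" (e.g. one unit rate) recalculates the total,
# mirroring the automatic recalculation the text describes.
grand_total = sum(line_total(q, r) for _, q, _, r in take_off)
print(f"{'Estimate total':<40}{grand_total:>10.2f}")  # 27480.00
```

This is the behaviour that makes spreadsheets remove the error-prone manual arithmetic: the formulas, not the Estimator, carry the recalculation.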
Besides that, Microsoft Excel instantly creates user-modifiable, on-screen or printed estimate reports, including price quotations; the software can thus save time, improve accuracy and achieve greater success. Phuwadol Samphaongoen (2009) notes that a digitizing tablet is a computer input device that uses a stylus and a tracking surface to capture a drawing into a computer system. The drawing traced on the tracking surface transfers point coordinates to the computer, and this can be used for many purposes, including construction cost estimating. In construction cost estimating, the purpose of a digitizing tablet is to digitize the paper-based blueprints provided to the Estimator by the designer. For example, the Estimator can obtain lengths, perimeters and areas from the drawing using the scale provided in the blueprints; these parameters are made available through the software package accompanying the digitizing tablet. After the paper blueprints have been digitized and the quantities for all work items have been determined, the Estimator may use those quantities to price the items. However, quantity take-off using a digitizing tablet may introduce errors from an unsteady hand while tracing the drawing, and digitizing a large number of blueprints is a very time-consuming process for the Estimator (Phuwadol Samphaongoen, 2009). Phuwadol Samphaongoen (2009) also mentions that 3-D computer-aided design (CAD) models allow Estimators to visualise what is going to be built in a 3-D environment. The ability to digitally extract and transfer data can speed up and facilitate the cost estimating process: a building model can be viewed from many different angles, including details of its elements, and dimensions can be extracted and transferred to the estimating software. On the other hand, Stephen D. Schuette and Roger W. Liska (1994) state that the duties of the Estimator may change if the Estimator's company implements computers in the estimating process.
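The tablet software's conversion from traced coordinates to a real-world area can be illustrated with the standard shoelace formula. This is a sketch under assumed inputs: the traced coordinates and the 1:100 scale are hypothetical, not from the cited source.

```python
# Illustrative sketch only: converting digitized point coordinates into a
# real-world area using the drawing scale, as digitizing-tablet software
# does. The scale and traced coordinates below are hypothetical.

def polygon_area(points):
    """Shoelace formula: area of a simple polygon from its vertex coordinates."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

# Traced corners of a room on the tablet, in millimetres on paper.
traced = [(0, 0), (40, 0), (40, 25), (0, 25)]

SCALE = 100  # drawing at 1:100, so 1 mm on paper = 100 mm in reality

paper_area_mm2 = polygon_area(traced)                 # 1000.0 mm^2 on paper
real_area_m2 = paper_area_mm2 * SCALE**2 / 1_000_000  # mm^2 -> m^2
print(real_area_m2)  # 10.0
```

The unsteady-hand error the text mentions enters exactly here: a few millimetres of jitter in `traced` propagates through the scale factor squared into the measured area.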
Before the computerisation of the estimating process, the Estimator spent the greatest amount of time determining material quantities and performing mathematical calculations. Computers accomplish these tasks rapidly and accurately, permitting the Estimator to give more attention to alternative construction methods, material supplier negotiations, predicting labour productivity, developing accurate cost information, and bidding strategies. However, the Estimator should not blindly accept everything that comes out of the computer or rely totally on the computerised system, as the computer cannot exercise judgment. As Stephen D. Schuette and Roger W. Liska (1994) recommend, the results of computerised estimating software should always be reviewed by the Estimator to avoid errors and mistakes, since such software is not immune to technical faults, which may not be detected easily or early. Besides, Stephen D. Schuette and Roger W. Liska (1994) state that computerised estimating software can also be subject to technical limitations, such as an inability to work under certain conditions, an inability to work well with other tools, and limited technical support. These limitations may trigger severe consequences for the estimating process when the Estimator lacks knowledge of the software. The manifestations of tool-based error include faults in software construction, faults in program performance frameworks, errors from secondary sources, inappropriate use by end users, and an inability to interact correctly with other applications (Oluwole Alfred Olatunji, 2010). In addition, Oluwole Alfred Olatunji (2010) mentions that computerised estimating software can suffer from obsolescence of standards due to the evolution of information technology in the estimating industry, especially in the description libraries and databases used for automatic estimating.
Reference standards built into a program must be updated frequently when they are revised; otherwise, the program may misapply the standards.

2.1.1.8 Experience or qualification of the Quantity Surveyor/Estimator

According to Skitmore et al. (1990), expert Quantity Surveyors in the UK provided evidence of significant differences in estimating accuracy between the individual surveyors involved. Lee Holm et al. (2005) emphasise that the accuracy of an estimate will also depend upon the experience of the Estimator, among other factors. As stated by Hira N. Ahuja and Walter J. Campbell (1998), the Estimator's knowledge can provide a measure of insight and accuracy that is unobtainable from any other information source. According to Skitmore et al. (1990), construction contract price estimating practice is, with very few exceptions, heavily dependent on the skill of the Estimator. This skill combines with the other factors affecting the quality of an estimate, namely the nature of the target, the information, the technique and feedback, and the personal attributes of the forecaster himself, to make up the general notion of expertise. Besides that, Mudd (1984, pp. 1-2) has described certain qualities that a Contractor's Estimator should possess. These include: a good basic numerate and literate education; reasonable time spent on site; the ability to interpret drawings; the ability to communicate; the facility to make accurate mathematical calculations; the application of logic and common sense; patience; the ability to cope with a vast volume of paper; a working knowledge of all the major trades; a close relationship with the people responsible for construction; a knack for picking up useful information; and flexibility.
2.1.1.9 New/innovative techniques or materials

The implementation of the Industrialised Building System (IBS) is still not widespread in the industry, even though the government has encouraged IBS as a means of reducing the percentage of foreign workers and improving quality, productivity, safety and competitiveness. IBS is a construction process that utilises techniques, products, components or building systems involving prefabricated components and on-site installation (CIDB 200). Salihudin Hassim, Mohd Saleh Jaafar and Saiful Azri Abu Hasan Sazalli (2009) emphasise that since the first IBS project in 1964, IBS has not been well accepted by construction parties in Malaysia because of failures to deal adequately with risk in IBS projects. Failure to keep within the cost estimate on IBS projects is still common in Malaysia, and it is one of the reasons limiting the development of IBS in the Malaysian construction industry.

2.1.1.10 Availability of historical price data

Martin Brook (2008) states that the estimating methods used for cost planning rely on historical cost data during the early stages, whereas the analytical estimating approach applies current prices to resources once the design is well developed. According to Hira N. Ahuja and Walter J. Campbell (1998), most established companies make it a policy to keep records of the actual costs incurred on their various construction projects. By comparing these records against estimated costs, the Estimator can determine whether his estimates were accurate and, if not, whether the discrepancy was due to the Estimator's own lack of expertise or to unforeseen cost-incurring conditions. In addition, these cost records serve another function: providing reliable cost data for preparing future estimates.
Tang Wai Kuen, Raymond (2005) mentions that several historical databases are available that provide current values for estimating the costs of the various units of work in a project. Such historical price data are collected from records of actual project costs, a company's own past experience, and ongoing price quotations from suppliers, and are published annually in the form of books, CDs and computer-based extranets. Stephen D. Schuette and Roger W. Liska (1994) suggest that accurate database information must be obtained in order to develop the estimating data bank. On the other hand, according to Tang Wai Kuen, Raymond (2005), applying published data or software database pricing without first adjusting for the particular aspects of the project may cause underestimation or overestimation, because every construction project is unique, with its own set of local factors: for example, the size of the project, the level of competition, the flexibility of the specifications, the work site, and working-hour restrictions. As stated by Tang Wai Kuen, Raymond (2005), when an estimating system is attached to a price database, a professional Estimator must review each line item to make sure it is applicable; inaccurate estimates result when the Estimator applies these database prices blindly. Historical data can themselves constitute a major cause of inaccurate cost estimates: if the Estimator stores incorrect or inaccurate data in the price database, future estimates that rely on that database will in turn be inaccurate. On the other hand, Hira N. Ahuja and Walter J. Campbell (1998) emphasise that estimating publications have increased greatly, and these guides are invaluable to Estimators who do not have access to actual job records. Published data are useful during all stages of estimate development.
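The adjustment step the text insists on, before a published rate is applied, can be sketched as a chain of project-specific factors. This is a hypothetical illustration: the base rate, the factor names and their values are assumptions, not figures from the cited sources.

```python
# Illustrative sketch only: adjusting a published historical unit rate for
# the particular aspects of a project before using it, as the text
# recommends. The base rate and every adjustment factor are hypothetical.

published_rate = 250.00  # published unit rate for a work item, per m3

# Project-specific adjustment factors (> 1.0 raises the rate, < 1.0 lowers it)
adjustments = {
    "location index":         1.08,  # remote work site, higher haulage cost
    "inflation since survey": 1.05,  # published data are a year old
    "level of competition":   0.97,  # many bidders, tighter margins
}

adjusted_rate = published_rate
for factor in adjustments.values():
    adjusted_rate *= factor

print(round(adjusted_rate, 2))
```

Applying the published 250.00 blindly, instead of the roughly 275 the factors imply, is exactly the kind of unreviewed line item that the text warns leads to inaccurate estimates.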
Although it is certainly not recommended that published data alone be used for an entire estimate, such data are undoubtedly useful for filling gaps in cost information where no other source is available.

2.1.2 Financial factors

Aiyetan Ayodeji Olatunji (2010) states that the performance of construction projects is negatively affected by financial risk, which may include high inflation and increased construction costs. These factors particularly affect projects where the materials and goods required for construction must be imported from foreign countries: the exchange rate changes daily and interest rates are subject to change, which increases the chance of an inaccurate estimate. As mentioned by Laeeq Hassan (2010), the financial risks associated with construction projects include paucity of funds, delays in payment and others. All construction parties or compani