Wednesday, July 31, 2019

Modern technology is enhancing social life

Technology is indispensable in solving modern problems, enhancing social life and ensuring a better future (Diamond 240). Technology is the application of scientific knowledge to achieve, among other things, the industrial and commercial objectives of an organization or a society. It has cultural, organizational and technical aspects (Freeman and Francisco 142). Technology has become a powerful tool for improving social life and solving human problems like poverty and disease. In addition, technology has increased the power of human beings to control and manipulate nature, thereby enhancing our ability to adapt to the environment (Global Scenario Group Report 19). Despite all its benefits, technology has negative side effects such as unemployment. It is therefore important to assess the dangers and promises of a technology in order to formulate safeguards that eliminate its negative impacts or its misuse. It is also important to put limits on some technologies, such as biotechnology, so as to harness their full potential without lowering the dignity of human beings (Freeman and Francisco 327).

Discussion

There are many areas in which technology impacts positively on society. The globalization of the internet, for example, has extended and empowered the human network (Cisco Systems E25). It has positively changed the manner in which social, political, commercial and personal interactions occur. The internet presents a platform on which businesses can be run, emergencies can be addressed and individuals can be informed. It is being used to support education, science and government (Global Scenario Group Report 24). Education has an immense impact on a society. It trains the human mind to think and make right decisions. Through education, one acquires knowledge and information which can be used to solve problems like unemployment in a society (Cisco Systems E27).
Technology enhances the processes of communication, collaboration and engagement, which are fundamental building blocks of education. It supports and enriches students' learning experiences. It helps to deliver learning materials like interactive activities, assessments and feedback to a larger number of students, faster and more cheaply. In contrast to traditional learning methods, which provide only two sources of expertise, the textbook and the instructor, both limited in format and timing, online courses can contain voice, data and video, which enhance understanding (Cisco Systems E30). Technology has thus removed geographical barriers to education and improved students' learning experiences.

Technology has facilitated the formation of global communities through social networks like Facebook. This has fostered social interactions independent of geographical location (Global Scenario Group Report 27). Online communities enable the sharing of valuable ideas and information capable of increasing productivity and opportunities in a society. For example, one can post to a forum to share health and treatment information with other members of the forum from all over the world. Though this kind of interaction is not physical, people are still able to share their social experiences and values efficiently with people from diverse backgrounds.

Technology has also helped to reduce poverty. Advancement in technology has led to new, efficient sources of energy, even for poor rural communities. The availability of cheap sources of energy opens up a society to investments and economic activities (Diamond 253). For example, technology has made it possible, through genetic engineering, to engineer crops that convert solar energy to fuels more efficiently. Governments are thus able to provide cheaper, more efficient sources of energy even to poor rural communities, since solar energy is available virtually anywhere in the world (Freeman and Francisco 307).
Genetic engineering can also be used to create plants that efficiently produce valuable products, like silicon chips for computers. This can result in improved income and living standards for members of a society. Technology can therefore bring about social revolution by enriching villages, thereby attracting people and wealth from cities (Freeman and Francisco 331). Technology enables farms in remote places to function as part of the global economy. Through advancement in technology, agricultural outputs have improved, thus ensuring food security. Technology helps farmers to avoid losses caused by natural disasters like drought. For example, a farmer can use a laptop enabled with a Global Positioning System to plant his or her crops with precision and efficiency, resulting in high harvests. At harvest time, the farmer can use mobile wireless technology to co-ordinate harvesting with the availability of a grain transporter and storage facilities. This can help to reduce losses caused by delays. The grain transporter can monitor the vehicle en route to maintain the best fuel efficiency and safe operation. In addition, changes in status can be relayed to the driver of the vehicle instantly (Cisco Systems E34). Technology has thus improved efficiency and effectiveness in the agricultural sector. It has enabled societies to have abundant, healthy food.

Modern technology is also widely used in the entertainment and travel industries. The internet has enabled people to share and enjoy many forms of recreation, regardless of their location. For example, one can explore different places interactively without having to visit them. Technology has also enabled the creation of new forms of entertainment, such as online games (Cisco Systems E36).
Entertainment is important in a society since it reduces stress and problems caused by depression. Fears have been raised about some technologies, such as nuclear weapons, being used to cause massive destruction in the world (Freeman and Francisco 308). But the global community has the ability to enforce controls and limits on technology use to ensure that technology is not misused. A beneficial technology cannot therefore be abandoned when measures can be put in place to shape and direct its use. Moreover, governments, in the form of regulatory institutions and professional bodies, have the potential to regulate technologies that are susceptible to misuse to ensure that they do not impact negatively on the values of the society (Freeman and Francisco 316).

Conclusion

Technology is embedded in all aspects of our society and has extensive implications for culture and social activities. Technology has significantly improved the health, agriculture, education, transport and communication sectors. These are critical sectors in any society as they contribute to development and the improvement of living standards. Although some technologies might have side effects, measures can be put in place by governments and the international community to ensure that all technologies are used for the benefit of the society.

Analysis and evaluation

1. The sources I used are qualified on the subject of technology and its social impacts. Cisco Corporation is a leading and credible technology firm. Its products are widely accepted all over the world. In the Cisco Corporate Social Responsibility Report of 2009, the organization outlined its key activities and how they contribute to the social welfare of the community. Cisco also offers certification courses that are very popular worldwide. The Cisco Networking Academy Program is a good example of how technology can be used to enhance the learning experience.
In the program, the instructor provides a syllabus and establishes a preliminary schedule for completing the course content. The expertise of the instructor is supplemented with an interactive curriculum comprising text, graphics, audio and animations. In addition, a tool called Packet Tracer is provided to build virtual representations of networks and emulate the functions of various networking devices (Cisco Systems E31).

Freeman and Francisco in their book give many examples of how technology can be used to enhance social life. They also point out some side effects of modern technology and how they can be addressed. In addition, the publisher of this book, Oxford University Press, is a credible publisher, and the source can thus be relied on. Some social impacts of modern technology are common in homes and workplaces. For example, the impacts of the internet on commerce are widely felt. The majority of people have embraced electronic commerce and are buying goods and services online from the comfort of their homes. Electronic commerce has thus opened new doors of opportunity that are being exploited in the society.

Diamond in his book explores the rise of civilization, discussing the evolution of agriculture and technology and their impact on society. He gives clear examples of how technology has improved the social welfare of communities, especially through improvements in agricultural production. Some of the examples he gives are common and can easily be related to what is going on around us.

The Global Scenario Group report is a credible source. Its main sponsors, who include the Stockholm Environment Institute, the Rockefeller Foundation, the Nippon Foundation, and the United Nations Environment Programme, are advocacy organizations in the fields of technology and the environment. The research explored the historical background of technology, the current situation and what the future might look like.
It gives clear examples of major scientific discoveries that are driving technology and their potential impacts. If I had adequate time, I would do additional research to find more information on effective measures the international community can take to ensure that technology is not misused. I would particularly focus on tools the international community can employ to protect technology from irresponsible individuals like terrorists. This is because technology itself is not bad; it is human beings who in some cases use it irresponsibly. Therefore, if correct measures and controls are put in place, technology can be used for the benefit of all in the society. This can eliminate fears and most of the side effects of technology.

2a. Technology shapes institutions, values and day-to-day activities in our society. It affects identities, relationships, social structures and economic activities (Freeman and Francisco 316). Technology is thus inevitable in the modern world. The internet has enhanced our social, political, commercial and personal interactions, enabling us to share information and ideas more efficiently. Technology has significantly improved the quality of education. It has enabled learning materials to reach a larger number of students efficiently and cheaply. An improvement in the quality of education enhances the social life of a community, since it enables individuals to make creative decisions capable of solving social problems. Technology has facilitated the formation of online communities where members can share their diverse experiences and ideas. This has the potential of increasing productivity in a society. In addition, technology has helped to reduce poverty. It has resulted in efficient sources of energy even in rural areas, thereby opening up rural areas for investments and development. Modern technology has also improved efficiency in the agricultural sector, therefore ensuring food security.
Moreover, I discussed how modern technology has been used in the entertainment and travel industries to create new forms of entertainment like online games. Finally, I noted that although some technologies have side effects, governments and the international community have the potential to direct and control the use of technology for the benefit of the society.

2b. Some of the evidence I used includes the Cisco Networking Academy Program, which is an example of how technology can be used to improve the quality of education. The second piece of evidence is that of a farmer using a laptop enabled with a Global Positioning System to plant his crops with precision and efficiency; the example illustrates how technology can be used to improve agricultural production. The third piece of evidence illustrates how genetic engineering can be used to engineer crops that convert solar energy to fuels more efficiently. The energy can then be used in rural areas to create wealth and employment opportunities. Finally, I illustrated how technology has been used in the entertainment and travel industries to create new forms of entertainment like online games.

2c. The major assumption I made is that the international community has the potential to control and direct the use of technology. This is only possible if there is peace and cooperation among all countries of the world. But this is not the case, especially in the Middle East. The instability in countries like Iran and the existence of extremists have increased fears of technology being used to cause massive destruction.

3. Someone might argue that technology is a problem because we rely on it so much, and that although it makes us better it also makes us worse. My position is that in any human community there must be control and order. It is therefore the responsibility of governments to ensure that the society is protected from the negative effects of technology.
Some people might point out the destructive effects of technology on the environment to argue against it. But if correct measures are put in place, we will be able to assess the dangers and promises of any technology in order to formulate effective safeguards against its side effects.

Works cited

Cisco Systems, Inc. Cisco Corporate Social Responsibility Report. cisco.com. Cisco Systems, Inc., 2009. Web.

Diamond, Jared. Guns, Germs, and Steel: The Fates of Human Societies. New York: W.W. Norton, 1999. Web.

Freeman, Chris, and Francisco Louca. As Time Goes By: From the Industrial Revolutions to the Information Revolution. Oxford: Oxford University Press, 2001. Web.

Global Scenario Group. Great Transition: The Promise and Lure of the Times Ahead. gsg.org. Global Scenario Group, 2002. Web.

Money spent on weapons is largely wasted

Many countries have engaged in programs of purchasing and manufacturing weapons, and they spend a lot of money doing so. Heated debates have arisen as a result of the expenses these countries incur. There are those who argue that the large sums of money spent on manufacturing weapons could be used in other sectors of the economy that would help citizens in a more direct way, for example the education and health sectors. On the other hand, there are those who argue that it is good for countries to spend the money because weapons act as security for the people. This paper discusses the claim that money spent on weapons is largely wasted. The first part of the paper discusses why money spent on weapons is largely wasted, and the second part looks at reasons why the money spent on weapons is not wasted.

There are basic needs that people in a country require. These are food, quality shelter and clothing, which they cannot live without. Many countries spend so much money on weapons while their citizens are dying of poverty. Instead of spending money on improving the lives of the people, most governments spend large sums of money to buy weapons. Many of the weapons that governments spend money to buy are not even for the security of the nation but for power protection. The governments use the weapons to suppress any opposition they might be facing within the country. The money used comes from the country's banks and from taxes (Smith, 1989). An example is what used to happen in Iraq during the reign of Saddam Hussein, who spent a lot of money to manufacture and purchase weapons for his own power protection. Many of the well-known world dictators also spend more money on weapons than they use for the welfare of their people (Cleave, 2001). War arises when people fail to agree on various important matters, for example boundary conflicts or political differences.
This means that war is a man-made thing, because it is people who decide to engage in war. There are many ways to solve conflicts without engaging in war. Despite the fact that war is one of the means that can be used to solve conflicts, it should always be the last option in any conflict resolution, management and transformation. Other peace initiatives, such as dialogue, mediation and arbitration between the conflicting sides, are cheaper and healthier than war. Therefore, governments all over the world should concentrate on making people aware of the importance of maintaining peace and on conflict resolution. This would be more logical and cheaper than spending billions of dollars on weapons to be used in wars (Quinlan, 2009). Peace education and awareness would not cost much, because the most important thing is simply to come up with programs for how it would be carried out. Weapons, on the other hand, are very expensive, because they must be either imported or manufactured, which is costly due to the labour and technology used in their manufacture. Therefore, it would be a waste of money to spend on weapons for war instead of using other ways, which are cheaper and healthier, to solve conflicts.

Weapons are destructive in their making and in the way they are used. During wars, a lot of destruction is done by weapons, both to human beings and to infrastructure. Many lives are lost as a result of destructive weapons. Countries also undergo losses as a result of war. Most countries that have engaged in wars suffer repercussions in their economies that are difficult to resolve. They spend a lot of money on reconstruction. Therefore, there is no logic in spending so much money to purchase or manufacture weapons that cause destruction which requires even more money to repair.
This is a double loss to the country, because once the weapons are used they cannot be reused. The money spent on the weapons, and on reconstruction of the damage they cause, could be used for other development in a country (Great Britain. Parliament. House of Lords, 1990).

On the other hand, however, money spent on weapons is not wasted. This is because many countries face threats from outside and therefore need to be on alert all the time and be armed. In the world we live in today, there are many threats to national security, for example terrorists. These are threats that can strike a nation even without prior signs. If a country is caught unaware, there may be bad repercussions, as the country cannot defend itself if it does not have enough arms to face its enemies. It is therefore advisable for countries to have sophisticated weapons capable of protecting the country from enemies such as terrorists who use modern weapons. This also acts as a way of preserving the pride and sovereignty of a country (Needler, 1996). Weapons manufacturing has also become an industry that employs many people and a sector used to gauge the development of a country. Many people are employed in weapons industries, where they work in various sectors of the industry (McNaugher, 1989). This helps to raise the living standards of the people. A country that invests more in this industry offers more employment opportunities to its people. People in a country that has sophisticated weapons have a sense of security, as they feel they have enough protection. Therefore, the money their countries spend on weapons is not a waste to them but acts as a source of security and also an investment from which they can get jobs. It is also worthwhile to spend much money on weapons if that is what other countries are doing.
This is because if other countries have sophisticated weapons which another country does not have, this is a threat to that country: if anything happens and the countries engage in war, it is to the disadvantage of the country without enough weapons. Therefore, heavy spending on weapons is not a waste, as this is a trend many countries have followed even as technology continues to develop. This is the same way countries are spending so much money on modern technology, for example in buying computers and other modern equipment (Forest, 2006). Therefore, as the debate continues, governments of various countries have their own reasons as to why they have to spend so much money on weapons. However, it is important for any government to spend money equitably across all its sectors, so that it does not spend too much on weapons and forget other sectors which are basic for the country. This would prevent the people from feeling that their government is wasting money on weapons.

References

Cleave, J. (2001) Christianity: Behaviour, Attitudes & Lifestyles, New York, Heinemann.

Forest, J. (2006) Homeland Security: Public Spaces and Social Institutions, Vol. 2, New York, Greenwood Publishing Group.

Great Britain. Parliament. House of Lords. (1990) The Parliamentary Debates (Hansard): Official Report, Volume 531, H.M.S.O.

McNaugher, T. (1989) New Weapons, Old Politics: America's Military Procurement Muddle, New York, Brookings Institution Press.

Needler, M. (1996) Identity, Interest, and Ideology: An Introduction to Politics, New York, Greenwood Publishing Group.

Quinlan, M. (2009) Thinking About Nuclear Weapons: Principles, Problems, Prospects, Oxford University Press US.

Smith, J. (1989) The World's Wasted Wealth: The Political Economy of Waste, Michigan, New Worlds Press.

Tuesday, July 30, 2019

The method that built science

Science is no easy enterprise, yet it remains one of humanity's most productive disciplines. The scientific method is by all means the cornerstone of the advancement of the major as well as the minor theories and derived knowledge in the scientific world. Over decades of progression, the use of the scientific method has led to a number of refinements in the established principles of the domains of science, as well as refinements in the scientific method itself. In effect, the mutual benefit gained from applying the scientific method both to the analysis of numerous scientific cases and to the broad investigations that underlie its basic precepts and principles has given science its edge in credibility, in contrast to the several other means, apart from and exclusive to the scientific enterprise, of obtaining vital as well as crude information about the natural and physical realm. Hence, for one to be able to use the scientific method effectively, a look into its parts and details is beneficial not only to the individual employing the method but also to the community in general.

The initial step in the scientific method is commonly identified as observation, which refers to the use of sensory perception, often with the aid of specific instruments, in examining the phenomena contained within the physical or natural environment. After arriving at a description of an event or a set of events or objects, a tentative and educated explanation of the observed event then follows. This process is often referred to as the formulation of the hypothesis, which provides a partial, unofficial and unverified explanation of the observed phenomena.
With the hypothesis in hand, what transpires next is the actual testing of this tentative explanation. This is done through the process of experimentation, with all of the necessary materials and equipment utilized in order to arrive at the resulting data. The data from the experiment are then gathered and recorded so as to have a list of available information that will serve as the background for the hypothesis. Before arriving at any final conclusion about the phenomena, an interpretation of the resulting data is necessary. This step provides the crucial link between the conclusion, which often comes in the form of a generalization, and the data collected from the experiment. Further, the interpretation of the data can be done in several ways, depending largely on the type of data gathered and the domain of science under which it falls. Generally, the interpretation of the data yields the necessary bases, or sets of premises, that will be generalized and placed in support of the conclusion. With all the essential data acquired, together with the interpretations of these data from the variables provided in the experiment, a generalization then follows. The conclusion serves as the pinnacle of a scientific method that started from mere observation of phenomena. Not only does the conclusion fit as the highlight of the scientific method, it also serves as the fundamental verifying statement for the hypothesis, thereby granting the formulated hypothesis either a substantiated and authenticated merit or a falsifying remark. There are also instances wherein the hypothesis is left hanging by the conclusion, as the latter sometimes arrives at a differing point and the hypothesis remains inconclusive even though experimentation has already been performed.
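The sequence of steps described above can be sketched as a small, runnable loop. The example below is an illustration only, not part of the essay: the altitude and boiling-point figures, and the "one degree per 300 metres" rule of thumb, are invented for demonstration purposes.

```python
# Illustrative sketch of the scientific method as a five-step process.
# The data and the hypothesis below are assumed for demonstration only.

# Step 1: observation -- recorded (altitude_m, boiling_point_c) pairs.
observations = [(0, 100.0), (1500, 95.0), (3000, 90.2), (4500, 85.1)]

# Step 2: hypothesis -- a tentative, testable explanation.
def hypothesis(altitude_m):
    """Water's boiling point falls roughly 1 degree C per 300 m of altitude."""
    return 100.0 - altitude_m / 300.0

# Steps 3-4: experimentation and data interpretation -- compare the
# hypothesis's predictions against the recorded data and measure the error.
errors = [abs(hypothesis(alt) - temp) for alt, temp in observations]
mean_error = sum(errors) / len(errors)

# Step 5: conclusion -- the generalization that supports or falsifies
# the hypothesis, here decided by a chosen error tolerance.
conclusion = "supported" if mean_error < 1.0 else "rejected"
print(conclusion, mean_error)
```

The point of the sketch is the structure, not the physics: observation produces data, the hypothesis makes predictions, experimentation and interpretation compare the two, and the conclusion either substantiates or falsifies the tentative explanation.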
In general, the scientific method, along with its intricate steps, has given the scientific community extra muscle to better shape its scope and foundations. Understanding the underlying steps of the scientific method is an essential and useful means of arriving at a more concrete exploration of numerous phenomena and the domains under which they fall.

Reference

Kramer, S. P. (1987). How to Think Like a Scientist: Answering Questions by the Scientific Method. New York: HarperCollins.

Monday, July 29, 2019

Progressive Movement and the New Deal

Social and economic change was not only necessary but vital to the success of America itself, as the government was extremely ambitious to implement these reforms. While recession continued to haunt countries worldwide, the progressive movement and the New Deal established a solid ground for fundamental change. The progressive movement and the New Deal were similar because they introduced new programs, embedded reform and paved the way for greatness for the nation.

The progressive movement and the New Deal were, in essence, similar in many ways. First and foremost was the fact that they halted the rebellion that was about to erupt during the industrial era. The progressive movement introduced new economic programs, partly due to the muckrakers' quest to address the ills of the society that were ruining the nation. The effort to improve society was a major push that was new to the nation, as key issues such as reforming working conditions and assisting the mentally ill challenged the whole notion of old traditions. Progressivism at its best aimed to remove corruption by imposing child labor laws, addressing lynching based on racism, and removing politicians involved in illegal business practices. At the local level, progressivism continued to display brilliant signs of reform as the construction of schools was pushed, political machines were dissolved, and politics was addressed in an efficient manner. Similarly, the New Deal was a focal point of domestic reform. Roosevelt's New Deal consisted of two phases, planned around recovery and relief. The first phase concentrated on healing society from the Great Depression through different channels. Acts such as the Agricultural Adjustment Administration clearly boosted agricultural reform.
Bank reform came through the Federal Deposit Insurance Corporation (FDIC), which insured deposits up to a limit and tightened financial regulations.

Sunday, July 28, 2019

Apple Case Analysis

Though Apple may be thriving on its success in different segments, especially its non-PC segment, it confronts certain issues which can critically affect its operations in the future. Since the late 1990s, Apple's overall share of the personal computer market has been consistently below 5%, far below that of Apple's traditional competitors. It seems that Apple's buyers are focused largely on its non-PC products and are willing to buy those products. The products that compete against the Apple Mac are reasonably priced, so consumers tend to prefer them over the Mac on price-related grounds. Further, technology is changing fast in the area where Apple operates, and historically firms tend to lose if they do not keep up with changes in technology. Whether Apple will be able to keep pace with this technology is something everybody would like to explore.

Situation Analysis

External Analysis

Apple's competition is international in its Mac segment of personal computers, where it sells its PCs through its own flagship stores, electronics retailers and its website. Apple's overall range of personal computers includes desktops, laptops and smaller mini notebooks. Different factors may be at play that directly affect the way Apple operates in the industry. It is critical to note that Generation Y is becoming technology oriented, with ownership of at least one PC considered essential, because a PC is nowadays used not only for entertainment but also for improving productivity and gaining access to information. What has changed over time, however, is that consumers tend to favor manufacturers that conserve the environment in their overall manufacturing process. The reusability of materials, and their not harming the environment, is what makes the difference.
It is also critical to note that the overall revenue of the industry has been rising over the last decade, except for a slight dip in 2009. It may therefore be safely assumed that the industry is growing and that there is relatively good potential for the existing players to expand with a little more innovation and creativity. A Porter Five Forces analysis of the personal computer industry suggests that buyers have high bargaining power because of low switching costs. Suppliers also tend to have high bargaining power due to their technological sophistication and the expertise they hold in providing the hardware and software components required to manufacture a personal computer. Providers such as Intel have a near-monopoly over certain critical components required to manufacture a PC and therefore tend to have more bargaining power. The overall threat of new entrants is relatively low, because the industry is dominated by large players and the capital expenditure required is relatively high. The threat of substitutes, however, can be significant, especially in the wake of the latest changes in technology allowing smaller, compact tablet PCs to emerge as alternatives. Apple's own iPad is considered a gadget which could actually kill personal computers; therefore, going forward, there can be significant threats from substitutes. As a result of technological developments and new market dynamics, the overall rivalry has intensified.

Saturday, July 27, 2019

Toward the 21st century

Toward the 21st century - Essay Example More than a few educational traditions, above all the constructivist and relativist paradigms, assert ethnographic research as an applicable research method. The aspect of immigration can be seen from this perspective. (Knott, 2005) In the context of migration, the US has been a very popular destination in world history. However, the analogy between black immigrants' experience in US cities and that of their white European predecessors was largely invalid. Looking into the matter, it can first be mentioned that migration is a subject studied at all levels when dealing with humanity and its idiosyncrasies. In order to understand migration we must understand its various components, including internal migration, external migration, immigration, and both refugees and Internally Displaced Persons. We must attempt to understand the reasons to migrate, how laws affect the various forms of migration, and whether there are solutions to the practice of migration. The objective is to study the problems, the solutions and the reasoning behind migration as a whole. In order to understand the reasons behind migration we must first define its various components. ... development of the country that people are leaving, specifically GDP, the level of domestic development, and finally income and quality of life within the countries. Another two factors include how urbanized an area is, and variations in that consideration, along with the level of education that would be available for children across the country of origin as opposed to isolated areas. Occasionally, the amount of US influence on a country can either adversely or conversely affect the amount of migration. (Fletcher, 2003) Coerced and free migration is a subject David Eltis and others have pursued for a better understanding of the methods and results of such actions. 
This tells how the method behind migration has changed over the centuries of humanity. According to Eltis, "all migration hinges on a cultural differential between donor and recipient societies." (Eltis, 2002, 3) This is evident in the migration between poorer and more prosperous countries, and between free and restricted countries as well. This alone would give reason for migration to occur. White migration to America started quite early. British merchant ships were trading on a regular basis with North America and the West Indies (after the acquisition of Virginia in 1607 and Barbados in 1625), and by the end of the 17th century a huge number of people (approximately 350,000) had managed to emigrate across the Atlantic Ocean on these very ships. This population base helped to propagate and facilitate new markets for trade and commerce from England. By 1777, the British had the largest occupation of America compared to the other leading European powers, the French and the Dutch. The British suffered a great setback when they lost 13 of their American colonies in the War of Independence, but compensated with more acquisitions by

Friday, July 26, 2019

Sixth Amendment Essay Example | Topics and Well Written Essays - 250 words - 1

Sixth Amendment - Essay Example Following a verdict, the guilty person can file a plea in the federal courts. This happens in situations where the accused is dissatisfied with the verdict. However, the court of appeal can accept or decline the case depending on the facts presented by the appellant (Smith, 2008). In following the right procedures, the court accords the guilty an appeal. The person has to prove the violation of his rights and his innocence in an appeal. The Sixth Amendment right to trial by jury enhances fairness by focusing on inequities in the application of law. It prevents partial jurisdiction. As such, its enactment has decreased cases of violation of the privileges of the accused. According to Smith (2008), a jury trial ensures that judges are impartial when delivering their verdict. Consequently, the nature of the case dictates to the judges their powers at trial. For example, judges with cases of corruption cannot sentence criminals. The judge does not have the autonomy to make decisions in a case. The jury safeguards the privileges of the accused regardless of the crime committed. Before trial, the accused should know the person who is accusing them. The law gives the accused an opportunity to face the complainant in court. To some extent, the accuser cross-examines the person he/she is accusing. Historically, statements made outside court influenced decisions in English courts. Therefore, the enactment of the right to confront witnesses ensured that the judge is not led by sentiments made outside court. The 12-member jury was constituted because of its benefits to the judiciary and the public. It also safeguarded the constitution of the United States. It was an impartial body because it enhanced fairness, dating back to the reign of Charlemagne. Consequently, I agree with the privilege to a trial because it enhances fairness. It allows

Social Networking Practices and Interactions Essay

Social Networking Practices and Interactions - Essay Example The paper tells that social sites contrast with the real lives that people lead. The likes and interests often posted on social sites are only meant to give people social status but differ from reality; thus firms will be targeting virtual people who will eventually not consume the products. People fall into various social classes, and the way one person lives is not the manner in which the next person will live. But on social media, due to its imminent influence, people always strive to fit into certain social classes just to stay in touch with the current trends in the world. For instance, if a new fashion line of designer clothes is released and someone posts a photo wearing those clothes, his/her followers will be influenced to purchase the same clothes to be at par with the current trend. Social media is emerging as essential in business, as it is perceived to be the marketing platform that takes heed of the needs of consumers. The 21st century is characterized by massive technological advancements that have led to the emergence of new ways in which people interact. Previously people were limited to telephone calls or at most sending emails, and thus it was the preserve of a few individuals with an internet connection. Today, virtually every locality inhabited by people has an internet signal, and people are accessing the World Wide Web from anywhere. With technological improvements in mobile gadgets such as smartphones, tablets, iPads and even microcomputers, many people have gained access to an internet connection. This has revolutionized the modern era of information processing, and dissemination has become almost instant. Online presence on social sites like Facebook, Twitter, Instagram and WhatsApp has been soaring by the day, now running into hundreds of millions of subscribers, thus creating a new niche of consumers for businesses with an online presence.

Thursday, July 25, 2019

Leadership and culture Coursework Example | Topics and Well Written Essays - 1000 words

Leadership and culture - Coursework Example These roles may come with challenges. The manner in which leaders overcome these challenges depends largely on the traits and philosophies of a given leader. The major concepts of this theory are the personal composition of the leader, based on physical appearance and cultural background. These cultures mould a leader and aid him or her in decision making and in the management of a given organization. Leaders aligned to this theory tend to focus more on their intellectual ability to manage a given situation. The most important aspect of this theory is the leader's cognitive ability to determine what is essential at any particular period (Northouse, 2010). The leader is guided by principles within a given organization and utilizes them to attain specific goals. A leader under this theory may create his own environment to exercise his or her skill, or modify existing environments to enable him to manage a given organization. There are two major roles of a leader within a given organization. The first is conflict resolution, where the leader is required to guide an organization through challenges (Northouse, 2010). The second role entails mentorship, where a leader is required to mentor individuals based on their personality and use their traits to ensure the process is a success. The two roles define a leader through the trait theory. The theory outlines how leaders are distinguished by specific characteristics, and holds that whether a given leader succeeds is determined by his or her behaviours. These behaviours are essential in a given organization. The theory organizes leaders into three major categories. The first category identifies leaders who aim at gaining control over a given group, concentrate on the entire organization, and use its structure to plan. 
The second category under this theory comprises leaders who are interested in the wellbeing of the organization and

Wednesday, July 24, 2019

MICROECONOMIC Essay Example | Topics and Well Written Essays - 1750 words

MICROECONOMIC - Essay Example What is the marginal cost and marginal benefit of another hour's ticket inspecting? ΔPAY/ΔHOURS = 0. However, the marginal benefit for the first hour is high, while the marginal cost is low. In the second hour, the marginal benefit is high but lower than the marginal benefit obtained in the first hour (Heijdra and Ploeg, 2002, p.124). On the other hand, the marginal cost for the second hour is higher than the marginal cost of the first hour. This trend continues as the number of hours progresses. Thus, parking inspectors can only increase or decrease the number of checks depending on their levels of satisfaction. c) What would happen to Deakin's revenue from parking (income from permits and fines) and to Deakin's costs of enforcing parking restrictions if the likelihood of being fined if you don't have a permit is 100%? What about if it is 0%? The revenues of the company are likely to increase if the likelihood of being fined for lack of a permit is 100%. However, people may become vigilant and decide to obtain the permits. In such a case, the revenues would depend only on the individuals found without permits. The costs would still remain the same since the company would have finances for its operations. In case the likelihood of being fined for not having a permit is 0%, the revenues of the company would decline. It would depend on other revenue streams. Additionally, the costs of offering parking services would use up the finances of the company. d) Minimizing the costs of parking Daily fee = $6. Total number of days = 96. Total cost of parking for the student = daily fee x number of days attended = 6 x 96 = $576. The student is better off not buying daily permits because they will cost more than the inclusive Tri-semester parking fee of $125 per trimester. The student would be best off buying the yearly permit, which costs $250, because over a year the Tri-semester fees would cost 125 x 3 = $375. 
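The cost comparison in part (d) can be checked with a short script. The figures ($6 per day for 96 days, $125 per trimester over three trimesters, $250 per year) are taken from the essay; the comparison logic is just an illustrative sketch.

```python
# Compare the three parking options described in part (d).
# All fee figures come from the text.

daily_fee = 6
days_attended = 96
trimester_fee = 125
trimesters_per_year = 3
yearly_fee = 250

daily_total = daily_fee * days_attended                # 6 * 96 = 576
trimester_total = trimester_fee * trimesters_per_year  # 125 * 3 = 375

costs = {
    "daily permits": daily_total,
    "tri-semester permits": trimester_total,
    "yearly permit": yearly_fee,
}
# The cheapest option is the one with the minimum total cost.
cheapest = min(costs, key=costs.get)
print(cheapest)  # yearly permit
```

So daily permits ($576) cost more than tri-semester permits ($375), and the yearly permit ($250) is cheapest, matching the essay's conclusion.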
Question 2: Prices and revenue a) Draw demand and supply diagrams to illustrate these two different markets. Make sure you label your diagrams, their axes and all relevant points. [Diagram: price/charge on the vertical axis and number of car park permits on the horizontal axis; the supply restriction moves the equilibrium from point X at (A1, P1) to point Y at (A2, P2) along the original demand curve.] According to the advice of the economists, reducing the number of permits from A1 to A2 would result in more revenue. At A1 permits, the revenue that Deakin collects is represented by the area A1 0 P1 X. When the number of permits is reduced to A2, the price/charge increases from P1 to P2. The revenue collected after the change in the number of permits is represented by the area A2 0 P2 Y. The economists estimate that the area A2 0 P2 Y will be larger than A1 0 P1 X. If it is larger, then Deakin can increase revenue by reducing the number of permits. In other words, if the economists' recommendation is implemented, the quantity of permits will fall, the price charged will increase and revenue will increase. b) Why does restricting the number of permits increase revenue for Deakin, but reduce it for the council? Explain what factors might bring about the different results in the two markets. According to the advice given to the local council, a reduction in the number of permits will result in a reduction of revenue. This means that, if the council reduces the number of per
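The contrast between the two markets in parts (a) and (b) can be illustrated numerically. The prices and quantities below are hypothetical, chosen only to show that the same cut in permits raises revenue when demand is relatively inelastic (a large price rise, as in Deakin's market) and lowers it when demand is relatively elastic (a small price rise, as in the council's market).

```python
# Revenue at any point on the demand curve is price * quantity
# (the rectangle such as A1 0 P1 X in the diagram).

def revenue(price, quantity):
    return price * quantity

# Inelastic demand (Deakin's case): halving the permits
# more than doubles the price, so revenue rises.
before_deakin = revenue(price=4, quantity=1000)   # 4000
after_deakin = revenue(price=10, quantity=500)    # 5000

# Elastic demand (the council's case): halving the permits
# barely moves the price, so revenue falls.
before_council = revenue(price=4, quantity=1000)  # 4000
after_council = revenue(price=5, quantity=500)    # 2500
```

Under these invented numbers, Deakin's revenue rises from 4000 to 5000 while the council's falls from 4000 to 2500, which is exactly the divergence the question asks about.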

Tuesday, July 23, 2019

Assignmentbiology questions to answer Essay Example | Topics and Well Written Essays - 1250 words

Assignmentbiology questions to answer - Essay Example In starch, all the glucose repeat units are oriented in the same direction. But in cellulose, each successive glucose unit is rotated 180 degrees around the axis of the polymer backbone chain, relative to the last repeat unit. Cellulose contains beta linkages on which enzymes cannot act, which maintains its structural integrity, whereas starch contains alpha acetal linkages. Enzymes can act upon these linkages in amylose and amylopectin, thereby releasing glucose, a source of energy. c. Alpha-glycosidic bond and beta-glycosidic bond d. Amylase acts specifically upon the alpha acetal linkage (present in starch). The linkages in cellulose are beta-acetal, on which amylase cannot act. 2. a- tick mark b- cross mark 3. Calcium- gives strength to the bones. Iron- an important element in hemoglobin, responsible for oxygen transfer. Phosphate- a necessary element in

Monday, July 22, 2019

Brita Case Essay Example for Free

Brita Case Essay The Brita Products Company began in 1988 under the recommendation of Charlie Couric, a marketing executive with the Clorox Company. Optimistic about its capability to be profitable, Clorox acquired the right to market the home water filtration system. Clorox, citing the overriding long-term benefits of continuous filter sales, initially engaged in deficit spending. Such measures paid off, and Clorox not only created a $350 million market but also captured 70% of the market revenue. Brita enjoyed success in the market by creating a perception of better-tasting water. However, as water purification technology improved and consumer awareness increased, taste alone was no longer enough to sustain its massive market share. Consumers are demanding more in terms of health benefits, and Brita needs to respond to their growing needs and wants. The market environment is characterized by fast growth. As consumers become more health-conscious, bottled water and water-filtration systems are becoming a necessity for most, with a Brita pitcher in 1 out of 7 of the 103 million US households. Brita's competitors were unable to effectively rival Brita in pitcher sales. Brita dominated despite many new entrants to the market. However, a small competitor, PUR, launched a different water filtration product. PUR's faucet-filter system offered added health and convenience benefits that Brita's pitcher couldn't provide. Suddenly, a competitor had come up with the first-mover product. Thus Couric is considering allocating resources to launch a faucet-mounted filtration system in response to these emerging competitors. Many think Brita needs to capitalize on this opportunity to gain new consumers while their name still remains synonymous with quality and taste. This raises the question: how should Brita attempt to further penetrate the market with its products? 
Let's take a look at the pros and cons of each option. Option 1: Implement the new faucet-mount filtration system. The Purpose: This writing aims to present one possible solution to the dilemma that the Clorox Company faces. The Clorox Company was the market leader in water filtration in the USA with the Brita Pitcher (one of Clorox's most important products), but in 1999 it faced the threat of a new product, the faucet-mounted filter. Clorox already had its own version of this new product ready to launch into the market, so the issue was to decide the best of the following strategies: 1. Continue selling only the current product; 2. Introduce the new faucet-mounted filter into the market in addition to the pitcher. The Analysis: Market Summary. Clorox launched the Brita Pitcher in 1988, and after a decade it was the market leader in water filtration systems with a market share of 69%. After the Brita pitcher launch, water quality became a growing concern among consumers. This new attitude about the quality of drinking water allowed the purified water market to grow in both bottled water and filter systems. This growth in the water market allowed Clorox

Sunday, July 21, 2019

Advances in DNA Sequencing Technologies

Advances in DNA Sequencing Technologies Abstract Recent advances in DNA sequencing technologies have led to efficient methods for determining the sequence of DNA. DNA sequencing was born in 1977, when Sanger et al. proposed the chain termination method and Maxam and Gilbert proposed their own method in the same year. Sanger's method proved to be the more practical of the two. Since the birth of DNA sequencing, more efficient DNA sequencing technologies have been produced, as Sanger's method was laborious, time consuming and expensive; Hood et al. proposed automated sequencers involving dye-labelled terminators. Due to the lack of available computational power prior to 1995, sequencing an entire bacterial genome was considered out of reach. This became a reality when Venter and Smith proposed shotgun sequencing in 1995. Pyrosequencing was introduced by Ronaghi in 1996; this method produces the sequence in real time and is applied by 454 Life Sciences. An indirect method of sequencing DNA, called sequencing by hybridisation, was proposed by Drmanac in 1987 and led to the DNA array used by Affymetrix. Nanopore sequencing is a single-molecule sequencing technique in which single-stranded DNA passes through an ion channel in a lipid bilayer and the ion conductance is measured. Synthetic nanopores are being produced as a substitute for the lipid bilayer. Illumina sequencing is one of the latest sequencing technologies to be developed, involving DNA clustering on flow cells and four dye-labelled terminators performing reversible termination. DNA sequencing has not only been used to sequence DNA but has also been applied in the real world, for example in the Human Genome Project and DNA fingerprinting. Introduction Reliable DNA sequencing became a reality in 1977, when Frederick Sanger perfected the chain termination method to sequence the genome of bacteriophage ΦX174 [1][2]. 
Before Sanger's proposal of the chain termination method there was the plus and minus method, also presented by Sanger, along with Coulson [2]. The plus and minus method depended on the use of DNA polymerase in transcribing the specific DNA sequence under controlled conditions. This method was considered efficient and simple, but it was not accurate [2]. Alongside Sanger's proposal of chain termination sequencing, another method of DNA sequencing involving restriction enzymes was introduced by Maxam and Gilbert, also reported in 1977, the same year as Sanger's method. The Maxam and Gilbert method shall be discussed in more detail later in this essay. The proposal of these two methods spurred many DNA sequencing methods, and as the technology developed, so did DNA sequencing. In this literature review, the various DNA sequencing technologies shall be looked into, as well as their applications in the real world and the tools that have aided DNA sequencing, e.g. PCR. This review shall begin with a discussion of the chain termination method by Sanger. The Chain Termination Method Sanger discovered that the inhibitory activity of 2′,3′-dideoxythymidine triphosphate (ddTTP) on DNA polymerase I was dependent on its incorporation into the growing oligonucleotide chain in the place of thymidylic acid (dT) [2]. In the structure of ddT there is no 3′-hydroxyl group; a hydrogen atom is in its place. With the hydrogen in place of the hydroxyl group, the chain cannot be extended any further, so termination occurs at the position where dT is incorporated. Figure 1 shows the structures of dNTP and ddNTP. 
In order to remove the 3′-hydroxyl group and replace it with a proton, the triphosphate has to undergo a chemical procedure [1], with a different procedure employed for each of the triphosphates. ddATP was prepared from the starting material 3′-O-tosyl-2′-deoxyadenosine, which was treated with sodium methoxide in dimethylformamide to produce the unsaturated compound 2′,3′-dideoxy-2′,3′-didehydroadenosine [4]. The double bond between carbons 2′ and 3′ of the cyclic ether was then hydrogenated with a palladium-on-carbon catalyst to give 2′,3′-dideoxyadenosine (ddA). The ddA was then phosphorylated to add the triphosphate group. Purification then took place on a DEAE-Sephadex column using a gradient of triethylamine carbonate at pH 8.4. Figure 2 is a schematic representation of the production of ddA prior to phosphorylation. In the preparation of ddTTP (Figure 3), thymidine was tritylated (+C(Ph)3) at the 5′-position and a methanesulphonyl (+CH3SO2) group was introduced at the 3′-OH group [5]. The methanesulphonyl group was substituted with iodine by refluxing the compound in 1,2-dimethoxyethane in the presence of NaI. After chromatography on a silica column, the 5′-trityl-3′-iodothymidine was hydrogenated in 80% acetic acid to remove the trityl group. The resultant 3′-iodothymidine was hydrogenated to produce 2′,3′-dideoxythymidine, which subsequently was phosphorylated. Once phosphorylated, ddTTP was purified on a DEAE-Sephadex column with a triethylammonium hydrogen carbonate gradient. Figure 3 is a schematic representation of the production of ddT prior to phosphorylation. In the preparation of ddGTP, the starting material was N-isobutyryl-5′-O-monomethoxytrityldeoxyguanosine [1]. 
After tosylation of the 3′-OH group, the compound was converted to the 2′,3′-didehydro derivative with sodium methoxide. The isobutyryl group was partly removed during this treatment with sodium methoxide and was removed completely by incubation in the presence of NH3 overnight at 45°C. During the overnight incubation period, the didehydro derivative was reduced to the dideoxy derivative and then converted to the triphosphate. The triphosphate was purified by fractionation on a DEAE-Sephadex column using a triethylamine carbonate gradient. Figure 4 is a schematic representation of the production of ddG prior to phosphorylation. The preparation of ddCTP was similar to that of ddGTP, but started from N-anisoyl-5′-O-monomethoxytrityldeoxycytidine. However, the purification step was omitted for ddCTP, as it gave a very low yield; the solution was therefore used directly in the experiment described in the paper [2]. Figure 5 is a schematic representation of the production of ddC prior to phosphorylation. With the four dideoxy samples now prepared, the sequencing procedure can commence. The dideoxy samples are placed in separate tubes, along with restriction fragments obtained from the ΦX174 replicative form and the four dNTPs [2]. Strand synthesis begins from the restriction fragments using the dNTPs, and a ddNTP is incorporated into the growing polynucleotide, terminating further strand synthesis. This is due to the lack of a hydroxyl group at the 3′ position of the ddNTP, which prevents the next nucleotide from attaching to the strand. The contents of the four tubes are separated by gel electrophoresis on acrylamide gels (see Gel-Electrophoresis). Figure 6 shows the sequencing procedure. Reading the sequence is straightforward [1]. First, the band that has moved furthest is located; this represents the smallest piece of DNA and is the strand terminated by incorporation of the dideoxynucleotide at the first position in the template. The track in which this band occurs is noted. 
For example (shown in Figure 6), if the band that moved the furthest is in track A, the first nucleotide in the sequence is A. To find the next nucleotide, the next most mobile band, corresponding to a DNA molecule one nucleotide longer than the first, is located; in this example that band is in track T. Therefore the second nucleotide is T, and the overall sequence so far is AT. The process is carried on along the autoradiograph until the individual bands start to close in and become inseparable, and therefore hard to read. In general it is possible to read up to 400 nucleotides from one autoradiograph with this method. Figure 7 is a schematic representation of an autoradiograph. Ever since Sanger perfected the method of DNA sequencing, there have been advances in methods of sequencing, along with notable achievements. Certain achievements, such as the Human Genome Project, shall be discussed later in this review. Gel-Electrophoresis Gel electrophoresis is defined as the movement of charged molecules in an electric field [1][8]. DNA molecules, like many other biological compounds, carry an electric charge; in the case of DNA, this charge is negative. Therefore, when DNA molecules are placed in an electric field, they migrate towards the positive pole (as shown in Figure 8). Three factors affect the rate of migration: shape, electrical charge and size. The polyacrylamide gel comprises a complex network of pores through which the molecules must travel to reach the anode. Maxam and Gilbert Method The Maxam and Gilbert method was proposed in the same year as the Sanger method. While Sanger's method involves enzymatic synthesis of radiolabelled fragments from unlabelled DNA strands [2], the Maxam-Gilbert method involves chemical cleavage of prelabelled DNA strands in four different ways to form four different collections of labelled fragments [6][7]. Both methods use gel electrophoresis to separate the DNA target molecules [8]. 
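The band read-off procedure described above for the chain-termination method can be sketched as a toy simulation: each ddNTP reaction "lane" holds the lengths of fragments terminated at that base, and reading the lanes from the most mobile (shortest) band upward recovers the template. The template and lane model are illustrative assumptions, not real gel data.

```python
# Toy model of reading a chain-termination sequencing gel.

def make_lanes(template):
    """Fragment lengths produced in each ddNTP reaction tube:
    a fragment ends wherever the matching ddNTP was incorporated."""
    lanes = {base: [] for base in "ACGT"}
    for position, base in enumerate(template, start=1):
        lanes[base].append(position)
    return lanes

def read_gel(lanes):
    """Read bands from shortest (most mobile) to longest,
    noting which lane each band appears in."""
    bands = sorted((length, base)
                   for base, lengths in lanes.items()
                   for length in lengths)
    return "".join(base for _, base in bands)

sequence = "ATGCGTA"
assert read_gel(make_lanes(sequence)) == sequence
```

The shortest fragment gives the first base, the next shortest the second base, and so on, exactly as in the autoradiograph description; the roughly 400-band limit of a real gel is ignored here.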
However, Sanger's chain termination method has proven to be simpler and easier to use than the Maxam and Gilbert method [9]. As a matter of fact, looking through the literature, textbooks tend to explain Sanger's method of DNA sequencing rather than Maxam and Gilbert's [1][3][9][10]. In Maxam and Gilbert's method there are two chemical cleavage reactions [6][7]. One takes place at guanine and adenine, the two purines; the other cleaves the DNA at cytosine and thymine, the pyrimidines. A specific reagent is used for each cleavage reaction: the purine-specific reagent is dimethyl sulphate and the pyrimidine-specific reagent is hydrazine. Each of these reactions is done in a different way, as the four bases have different chemical properties. The cleavage reaction for guanine/adenine uses dimethyl sulphate to add a methyl group to the guanines at the N7 position and to the adenines at the N3 position [7]. The glycosidic bond of a methylated adenine is unstable and breaks easily on heating at neutral pH, leaving the sugar free. Treatment with 0.1 M alkali at 90°C then cleaves the sugar from the neighbouring phosphate groups. When the resulting end-labelled fragments are resolved on a polyacrylamide gel, the autoradiograph contains a pattern of dark and light bands. The dark bands arise from breakage at the guanines, which methylate at a rate 5-fold faster than the adenines. In this reaction the guanine bands appear stronger than the adenine bands, which can lead to misinterpretation; therefore an adenine-enhanced cleavage reaction is also performed. Figure 9 shows the structural changes of guanine when undergoing the modifications involved in Maxam-Gilbert sequencing. 
With adenine-enhanced cleavage, the glycosidic bond of methylated adenosine is less stable than that of methylated guanosine; thus gentle treatment with dilute acid at the methylation step releases the adenines, allowing darker adenine bands to appear on the autoradiograph [7]. The chemical cleavage at the cytosine and thymine residues involves hydrazine instead of dimethyl sulphate. The hydrazine cleaves the base, leaving ribosylurea [7]. After partial hydrazinolysis in 15-18 M aqueous hydrazine at 20°C, the DNA is cleaved with 0.5 M piperidine. The piperidine (a cyclic secondary amine), as the free base, displaces all the products of the hydrazine reaction from the sugars and catalyses the β-elimination of the phosphates. The final pattern contains bands of similar intensity from the cleavages at the cytosines and thymines. For cleavage at cytosine only, the presence of 2 M NaCl preferentially suppresses the reaction of thymine with hydrazine. Once the cleavage reaction has taken place, each original strand is broken into a labelled fragment and an unlabelled fragment [7]. All the labelled fragments start at the 5′ end of the strand and terminate at the base that precedes the site of cleavage along the original strand. Only the labelled fragments are recorded on the gel electrophoresis. Dye-labelled terminators For many years DNA sequencing was done by hand, which is both laborious and expensive [3]. Before automated sequencing, about 4 x 10^6 bases of DNA had been sequenced using the Sanger and Maxam-Gilbert methods [11]. Both methods require four sets of reactions and a subsequent electrophoresis step in adjacent lanes of a high-resolution polyacrylamide gel. With the new automated sequencing procedures, four different fluorophores are used, one in each of the base-specific reactions. 
The reaction products are combined and co-electrophoresed, and the DNA fragments generated in each reaction are detected near the bottom of the gel and identified by their colour. As for choosing which DNA sequencing method to use, Sanger's method was chosen, because it has proven to be the most durable and efficient method of DNA sequencing and was the choice of most investigators in large-scale sequencing [12]. Figure 10 shows how a typical sequence is generated using an automated sequencer. The selection of the dyes was central to the development of automated DNA sequencing [11]. The fluorophores selected had to meet several criteria. For instance, the absorption and emission maxima had to be in the visible region of the spectrum [11], which is between 380 nm and 780 nm [10], and each dye had to be easily distinguishable from the others [11]. Also, the dyes should not impair the hybridisation of the oligonucleotide primer, as this would decrease the reliability of synthesis in the sequencing reactions. Figure 11 shows the structures of the dyes used in a typical automated sequencing procedure, where X is the moiety through which the dye is bound. Table 1 shows which dye is covalently attached to which nucleotide in a typical automated DNA sequencing procedure:
Dye - Nucleotide attached
Fluorescein - Adenosine
NBD - Thymine
Tetramethylrhodamine - Guanine
Texas Red - Cytosine
In designing the instrumentation of the fluorescence detection apparatus, the primary consideration was sensitivity. As the concentration of each band on the co-electrophoresis gel is around 10 M, the instrument needs to be capable of detecting dye concentrations of that order. This level of detection can readily be achieved by commercial spectrofluorimeter systems. Unfortunately, detection from a gel leads to a much higher background scatter, which in turn leads to a decrease in sensitivity. This is solved by using a laser excitation source in order to obtain maximum sensitivity [11]. Figure 12 is a schematic diagram of the instrument, with an explanation of the instrumentation employed. When analysing the data, Hood found some complications [11]. Firstly, the emission spectra of the different dyes overlap; to overcome this, multicomponent analysis was employed to determine the amounts of the four dyes present in the gel at any given time. Secondly, the different dye molecules impart non-identical electrophoretic mobilities to the DNA fragments, meaning that the oligonucleotides were not of equal effective base lengths. The third major complication in analysing the data comes from the imperfections of the enzymatic methods; for instance, there are often regions of the autoradiograph that are difficult to sequence. These complications were overcome in five steps [11]: 1. High-frequency noise is removed using a low-pass Fourier filter. 2. A time delay (1.5-4.5 s) between measurements at different wavelengths is partially corrected for by linear interpolation between successive measurements. 3. A multicomponent analysis is performed on each set of four data points; this computation yields the amount of each of the four dyes present in the detector as a function of time. 4. The peaks present in the data are located. 5. The mobility shift introduced by the dyes is corrected for using empirically determined correction factors. Since the publication of Hood's proposal of fluorescence detection in automated DNA sequence analysis, research has focused on developing dyes which are better in terms of sensitivity [12]. Bacterial and Viral Genome Sequencing (Shotgun Sequencing) Prior to 1995, many viral genomes had been sequenced using Sanger's chain termination technique [13], but no bacterial genome had been sequenced.
This is solved by using a laser excitation source to obtain maximum sensitivity [11]. Figure 12 is a schematic diagram of the instrument with an explanation of the instrumentation employed. When analyzing data, Hood found some complications [11]. Firstly, the emission spectra of the different dyes overlapped; to overcome this, multicomponent analysis was employed to determine the amounts of the four dyes present in the gel at any given time. Secondly, the different dye molecules impart non-identical electrophoretic mobilities to the DNA fragments, meaning that co-migrating oligonucleotides were not necessarily of equal base length. The third major complication comes from imperfections of the enzymatic method; for instance, there are often regions of the autoradiograph that are difficult to sequence. These complications were overcome in five steps [11]:

1. High-frequency noise is removed using a low-pass Fourier filter.
2. A time delay (1.5–4.5 s) between measurements at different wavelengths is partially corrected for by linear interpolation between successive measurements.
3. A multicomponent analysis is performed on each set of four data points; this computation yields the amount of each of the four dyes present in the detector as a function of time.
4. The peaks present in the data are located.
5. The mobility shift introduced by the dyes is corrected for using empirically determined correction factors.

Since the publication of Hood's proposal of fluorescence detection in automated DNA sequence analysis, research has focused on developing methods that are better in terms of sensitivity [12].

Bacterial and Viral Genome Sequencing (Shotgun Sequencing)

Prior to 1995, many viral genomes had been sequenced using Sanger's chain-termination technique [13], but no bacterial genome had been sequenced.
The viral genomes that had been sequenced include the 229 kb genome of cytomegalovirus [14] and the 192 kb genome of vaccinia [15]; the 187 kb mitochondrial and 121 kb chloroplast genomes of Marchantia polymorpha had also been sequenced [16]. Viral genome sequencing has been based upon the sequencing of clones usually derived from extensively mapped restriction fragments, or λ or cosmid clones [17]. Despite advances in DNA sequencing technology, the sequencing of genomes had not progressed beyond clones on the order of ~250 kb, owing to the lack of computational approaches that would enable the efficient assembly of a large number of fragments into an ordered single assembly [13][17]. In 1995, Venter and Smith proposed shotgun sequencing, which enabled Haemophilus influenzae (H. influenzae) to become the first bacterial genome to be sequenced [13][17]. H. influenzae was chosen as it has a base composition similar to that of a human, with 38% of the sequence made of G + C. Table 2 shows the procedure of shotgun sequencing [17]. When constructing the library, ultrasonic waves were used to randomly fragment the genomic DNA into fairly small pieces of about the size of a gene [13]. The fragments were purified and then attached to plasmid vectors [13][17]. The plasmid vectors were then inserted into an E. coli host cell to produce a library of plasmid clones. The E. coli host cell strains had no restriction enzymes, which prevented any deletions, rearrangements and loss of the clones [17]. The fragments are randomly sequenced using automated sequencers (dye-labelled terminators), with the use of T7 and SP6 primers to sequence the ends of the inserts and enable sixfold coverage of the fragments [17].
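The sixfold coverage target can be motivated with a back-of-the-envelope Lander-Waterman estimate, which is not part of the original paper: at coverage c = LN/G, the expected fraction of the genome left unsequenced is roughly e^-c. The read length and read count below are illustrative assumptions; only the ~1.83 Mb genome size is H. influenzae's.

```python
import math

def coverage_and_missed(read_len, n_reads, genome_len):
    """Coverage c = L*N/G and the expected unsequenced fraction e^-c
    (Lander-Waterman approximation, assuming random read placement)."""
    c = read_len * n_reads / genome_len
    return c, math.exp(-c)

# Illustrative numbers for a genome of roughly H. influenzae's size (~1.83 Mb).
c, missed = coverage_and_missed(read_len=500, n_reads=22_000, genome_len=1_830_000)
print(round(c, 2), round(missed * 100, 2))  # ~6x coverage leaves only ~0.25% unsequenced
```

Under this model, doubling the coverage squares the missed fraction, which is why coverage well above 1x is needed before gap closure becomes tractable.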
Table 2 (Reference 17)
Stage – Description
Random small-insert and large-insert library construction – Shear genomic DNA randomly to ~2 kb and 15–20 kb respectively
Library plating – Verify the random nature of the library and maximize random selection of small-insert and large-insert clones for template production
High-throughput DNA sequencing – Sequence a sufficient number of fragments from both ends for 6× coverage
Assembly – Assemble random sequence fragments and identify repeat regions
Gap closure (physical gaps) – Order all contigs (fingerprints, peptide links, λ clones, PCR) and provide templates for closure
Gap closure (sequence gaps) – Complete the genome sequence by primer walking
Editing – Inspect the sequence visually and resolve sequence ambiguities, including frameshifts
Annotation – Identify and describe all predicted coding regions (putative identifications, starts and stops, role assignments, operons, regulatory regions)

Once the sequencing reactions have been completed, the fragments need to be assembled, and this is done using the TIGR Assembler software (The Institute for Genomic Research) [17]. The TIGR Assembler simultaneously clusters and assembles fragments of the genome. To obtain the speed necessary to assemble more than 10⁴ fragments [17], an algorithm builds a table of all 10-bp oligonucleotide subsequences to generate a list of potential sequence fragment overlaps. The algorithm begins with an initial contig (a single fragment); to extend the contig, a candidate fragment is chosen based on shared oligonucleotide content. The initial contig and candidate fragment are aligned by a modified version of the Smith-Waterman [18] algorithm, which allows optional gapped alignments. The contig is extended by the fragment only if strict overlap criteria are met. The algorithm automatically lowers these criteria in regions of minimal coverage and raises them in regions with a possible repetitive element [17].
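The candidate-selection idea above can be sketched in a few lines. This is a toy illustration of shared-oligonucleotide overlap detection, not the TIGR Assembler itself; k = 4 is used here for readability, versus the 10-bp subsequences described above, and the fragment names are invented.

```python
from collections import defaultdict

def kmers(s, k=4):
    """All k-bp subsequences of s."""
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def candidate_overlaps(contig, fragments, k=4):
    """Rank fragments by how many k-mers they share with the contig."""
    index = defaultdict(set)            # k-mer -> fragment names containing it
    for name, frag in fragments.items():
        for km in kmers(frag, k):
            index[km].add(name)
    hits = defaultdict(int)             # fragment name -> shared k-mer count
    for km in kmers(contig, k):
        for name in index[km]:
            hits[name] += 1
    return sorted(hits, key=hits.get, reverse=True)

frags = {"f1": "TTGCAATCG", "f2": "GGGGGGGGG", "f3": "ATCGGGTAC"}
print(candidate_overlaps("GCAATCGGG", frags))  # -> ['f1', 'f3']
```

In the real assembler a top-ranked candidate would then be aligned to the contig (e.g. by Smith-Waterman) before the extension is accepted; fragments sharing no k-mers, like f2 here, are never considered.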
The TIGR Assembler is designed to take advantage of large clone sizes [17]. It also enforces the constraint that sequences from the two ends of the same template point toward one another in the contig and are located within a certain range of base pairs [17]. The TIGR Assembler thus provides the computational power to assemble the fragments. Once the fragments have been aligned, the TIGR Editor is used to proofread the sequence and check for any ambiguities in the data [17]. This technique does require precautionary care; for instance, the small-insert library should be constructed and end-sequenced concurrently [17], and it is essential that the sequence fragments are of the highest quality and rigorously checked for any contamination [17].

Pyrosequencing

Most DNA sequencing methods require gel-electrophoresis; however, in 1996 at the Royal Institute of Technology, Stockholm, Ronaghi proposed pyrosequencing [19][20]. This is an example of sequencing-by-synthesis, where DNA molecules are clonally amplified on a template, and this template then undergoes sequencing [25]. The approach relies on detecting DNA polymerase activity through the inorganic pyrophosphate (PPi) released during DNA synthesis, measured with an enzymatic luminometric detection assay, and offers the advantage of real-time detection [19]. Ronaghi used Nyren's [21] description of an enzymatic system consisting of DNA polymerase, ATP sulphurylase and luciferase to couple the release of PPi, obtained when a nucleotide is incorporated by the polymerase, with light emission that can easily be detected by a luminometer or photodiode [20]. When PPi is released, it is immediately converted to adenosine triphosphate (ATP) by ATP sulphurylase, and the level of generated ATP is sensed by luciferase, which produces photons [19][20][21]. Unused ATP and deoxynucleotides are degraded by the enzyme apyrase.
The presence or absence of PPi, and therefore the incorporation or non-incorporation of each nucleotide added, is ultimately assessed on the basis of whether or not photons are detected. There is minimal time lapse between these events, and the conditions of the reaction are such that iterative addition of nucleotides and PPi detection are possible. The PPi released on nucleotide incorporation is detected by ELIDA (Enzymatic Luminometric Inorganic pyrophosphate Detection Assay) [19][21]. Within the ELIDA, the PPi is converted to ATP with the help of ATP sulphurylase, and the ATP reacts with luciferin to generate light, at more than 6 × 10⁹ photons, at a wavelength of 560 nm, which can be detected by a photodiode, photomultiplier tube or charge-coupled device (CCD) camera [19][20]. As mentioned before, the DNA molecules need to be amplified by the polymerase chain reaction (PCR), which is discussed later. Ronaghi observed that dATP interfered with the detection system [19]. This interference is a major problem when the method is used to detect a single-base incorporation event. The problem was rectified by replacing dATP with dATPαS (deoxyadenosine α-thiotriphosphate). It was noticed that adding a small amount of dATP (0.1 nmol) induces an instantaneous increase in the light emission followed by a slow decrease until it reaches a steady-state level (as Figure 11 shows). This makes it impossible to start a sequencing reaction by adding dATP; the reaction must instead be started by addition of DNA polymerase. The signal-to-noise ratio also became higher for dATP compared with the other nucleotides. On the other hand, addition of 8 nmol dATPαS (80-fold more than the amount of dATP) had only a minor effect on luciferase (as Figure 14 shows). However, dATPαS is less than 0.05% as effective as dATP as a substrate for luciferase [19].
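The iterative nucleotide addition and light read-out described above can be sketched as follows. This is a minimal illustrative model, not Ronaghi's protocol: it assumes a fixed A-C-G-T dispensation order, treats the target as the strand being read directly, and takes peak height as simply proportional to the number of bases incorporated in a run (each incorporation releasing one PPi).

```python
def pyrogram(seq, dispensation_order="ACGT", cycles=4):
    """Simulate pyrosequencing peaks: nucleotides are dispensed one at a
    time, and the light pulse from the PPi -> ATP -> luciferase cascade
    scales with how many bases of that type are incorporated in a run."""
    pos, peaks = 0, []
    for _ in range(cycles):
        for nt in dispensation_order:
            run = 0
            while pos < len(seq) and seq[pos] == nt:
                run += 1          # one incorporation -> one PPi -> light
                pos += 1
            peaks.append((nt, run))   # peak height ~ run length
    return peaks

peaks = pyrogram("AACGTT", cycles=2)
print([(nt, h) for nt, h in peaks if h])  # -> [('A', 2), ('C', 1), ('G', 1), ('T', 2)]
```

Note how the homopolymer run "AA" gives a double-height peak; in real instruments the proportionality degrades for long runs, which is a known limitation of pyrosequencing.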
Pyrosequencing was adapted by 454 Life Sciences for sequencing by synthesis [22] and is known as the Genome Sequencer (GS) FLX [23][24]. The 454 system starts with random ssDNA (single-stranded DNA) fragments, and each random fragment is bound to a bead under conditions that allow only one fragment per bead [22]. Once the fragment is attached to the bead, clonal amplification occurs in an emulsion. The emulsified beads are purified, placed in microfabricated picolitre wells and then undergo pyrosequencing. A lens array in the detection part of the instrument focuses the luminescence from each well onto the chip of a CCD camera. The CCD camera images the plate every second in order to detect the progression of the pyrosequencing [20][22]. The pyrosequencing machine generates raw data in real time in the form of bioluminescence from the reactions, and the data are presented as a pyrogram [20].

Sequencing by Hybridisation

Chain termination, Maxam-Gilbert and pyrosequencing, discussed earlier, are all direct methods of sequencing DNA, in which each base position is determined individually [26]. There are also indirect methods, in which the DNA sequence is assembled based on experimental determination of the oligonucleotide content of the chain. One promising method of indirect DNA sequencing is called sequencing by hybridisation, in which sets of oligonucleotide probes are hybridised under conditions that allow the detection of complementary sequences in the target nucleic acid [26]. Sequencing by hybridisation (SBH) was proposed by Drmanac et al in 1987 [27] and is based on Doty's observation that when DNA is heated in solution, the double strand melts to form single-stranded chains, which then renature spontaneously when the solution is cooled [28]. This gives one piece of DNA the ability to recognise another.
This led to Drmanac's proposal that oligonucleotide probes hybridised under these conditions would allow complementary sequences in the DNA target to be detected [26][27]. In SBH, an oligonucleotide probe (an n-mer probe, where n is the length of the probe) is a substring of a DNA sample. The process is similar to doing a keyword search in a page full of text [29]. The set of positively expressed probes is known as the spectrum of the DNA sample. For example, if the single-stranded DNA 5′-GGTCTCG-3′ is sequenced using 4-mer probes, four probes will hybridise onto the sequence successfully. The remaining probes will form hybrids with a mismatch at the end base and will be denatured during selective washing. The probes that match at the end base result in fully matched hybrids, which are retained and detected. Each positively expressed probe serves as a platform to decipher the next base, as is seen in Figure 16. The probes that have successfully hybridised onto the sequence need to be detected. This is achieved by labelling the probes with dyes such as Cyanine3 (Cy3) and Cyanine5 (Cy5) so that the degree of hybridisation can be detected by imaging devices [29]. SBH methods are ideally suited to microarray technology due to their inherent potential for parallel sample processing [29]. An important advantage of using a DNA array rather than a multiple probe array is that all the resulting probe-DNA hybrids in any single probe hybridisation are of identical sequence [29]. One of the main types of DNA hybridisation array format is the oligonucleotide array, which is currently patented by Affymetrix [30]. The commercial uses of this shall be discussed under the application of the DNA array (Affymetrix). Due to the small size of the hybridisation array and the small amount of the target present, it is a challenge to acquire the signals from a DNA array [29]. These signals must first be amplified before they can be detected by the imaging devices.
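The spectrum idea described above, in which each positive probe deciphers the next base, can be sketched in a few lines. This is a toy reconstruction assuming an error-free spectrum and an unambiguous one-base extension at each step, using the 4-mer example sequence from the text:

```python
def spectrum(seq, n=4):
    """The set of n-mer probes that hybridise perfectly to seq."""
    return {seq[i:i + n] for i in range(len(seq) - n + 1)}

def reconstruct(start_probe, spec, length, n=4):
    """Extend a known starting probe one base at a time: a base is
    accepted only if the resulting n-mer is in the spectrum."""
    seq = start_probe
    while len(seq) < length:
        for base in "ACGT":
            if seq[-(n - 1):] + base in spec:
                seq += base
                break
        else:
            break   # ambiguous or missing probe: stop
    return seq

spec = spectrum("GGTCTCG")
print(sorted(spec))                  # the four positively expressed 4-mers
print(reconstruct("GGTC", spec, 7))  # -> GGTCTCG
```

Real SBH is harder than this sketch suggests: repeated n-mers make the extension ambiguous, which is why probe length and error handling dominate the practical designs.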
Signals can be boosted by two means: target amplification and signal amplification. In target amplification, such as PCR, the amount of target is increased to enhance signal strength, while in signal amplification the amount of signal per unit target is increased.

Nanopore Sequencing

Nanopore sequencing was proposed in 1996 by Branton et al, who showed that individual polynucleotide molecules can be characterised using a membrane channel [31]. Nanopore sequencing is an example of single-molecule sequencing, in which the concept of sequencing-by-synthesis is followed but without the prior amplification step [24]. It is achieved by measuring the ionic conductance as a polynucleotide passes through a single ion channel in a biological membrane or planar lipid bilayer. The measurement of ionic conductance is routine in neurobiology and biophysics [31], as well as in pharmacology (Ca²⁺ and K⁺ channels) [32] and biochemistry [9]. Although most channels undergo voltage-dependent or ligand-dependent gating, there are several large ion channels (e.g. Staphylococcus aureus α-hemolysin) which can remain open for extended periods, thereby allowing continuous ionic current to flow across a lipid bilayer [31]. A transmembrane voltage applied across an open channel of appropriate size should draw DNA molecules through the channel as extended linear chains whose presence would measurably reduce ionic flow. It was assumed that this reduction in ionic flow would allow single-channel recordings to characterise the length, and hence other characteristics, of the polynucleotide. In Branton's proposal, α-hemolysin was used to form a single channel across a lipid bilayer separating two buffer-filled compartments [31]. α-Hemolysin is a monomeric, 33 kDa, 293-residue protein that is secreted by the human pathogen Staphylococcus aureus [33].
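The blockade read-out described above can be sketched as simple event detection on a current trace: an open pore passes a steady current, a translocating polynucleotide blocks part of it, and event duration reports on polymer length. The current levels and threshold below are invented for illustration, not measured values.

```python
def detect_events(current_trace, open_level=100.0, threshold=0.5):
    """Return (start index, duration) of runs where the current drops
    below threshold * open_level, i.e. candidate translocation events."""
    events, start = [], None
    for i, current in enumerate(current_trace):
        blocked = current < threshold * open_level
        if blocked and start is None:
            start = i                      # event begins
        elif not blocked and start is not None:
            events.append((start, i - start))
            start = None                   # event ends
    if start is not None:                  # trace ended mid-event
        events.append((start, len(current_trace) - start))
    return events

trace = [100, 99, 40, 38, 41, 100, 101, 35, 36, 100]
print(detect_events(trace))  # -> [(2, 3), (7, 2)]
```

Here the first blockade lasts three samples and the second two, so under the stated assumption the first polynucleotide would be judged the longer of the two.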
The nanopores are produced when α-hemolysin subunits are introduced into a buffered solution that a lipid bilayer separates into two compartments (known as cis and trans): the head of t

Advances in DNA Sequencing Technologies

Abstract

Recent advances in DNA sequencing technologies have led to efficient methods for determining the sequence of DNA. DNA sequencing was born in 1977, when Sanger et al proposed the chain termination method and Maxam and Gilbert proposed their own method in the same year. Sanger's method proved the more favourable of the two. Since the birth of DNA sequencing, more efficient DNA sequencing technologies have continually been produced: because Sanger's method was laborious, time-consuming and expensive, Hood et al proposed automated sequencers involving dye-labelled terminators. Due to the lack of available computational power prior to 1995, sequencing an entire bacterial genome was considered out of reach; this became a reality when Venter and Smith proposed shotgun sequencing in 1995. Pyrosequencing was introduced by Ronaghi in 1996; this method produces the sequence in real time and is applied by 454 Life Sciences. An indirect method of sequencing DNA, proposed by Drmanac in 1987 and called sequencing by hybridisation, led to the DNA array used by Affymetrix. Nanopore sequencing is a single-molecule sequencing technique in which single-stranded DNA passes through a lipid bilayer via an ion channel and the ionic conductance is measured; synthetic nanopores are being produced to substitute for the lipid bilayer. Illumina sequencing is one of the latest sequencing technologies to be developed, involving DNA clustering on flow cells and four dye-labelled terminators performing reversible termination. DNA sequencing has not only been used to determine sequences but has also been applied in the real world, in the Human Genome Project and in DNA fingerprinting.
Introduction

Reliable DNA sequencing became a reality in 1977, when Frederick Sanger perfected the chain termination method to sequence the genome of bacteriophage φX174 [1][2]. Before Sanger's proposal of the chain termination method there was the plus and minus method, also presented by Sanger along with Coulson [2]. The plus and minus method depended on the use of DNA polymerase to transcribe the specific DNA sequence under controlled conditions. This method was considered efficient and simple; however, it was not accurate [2]. Alongside the proposal of chain termination sequencing by Sanger, another method of DNA sequencing was introduced by Maxam and Gilbert, involving chemical cleavage, also reported in 1977, the same year as Sanger's method. The Maxam and Gilbert method shall be discussed in more detail later in this essay. The proposal of these two methods spurred many further DNA sequencing methods, and as technology developed, so did DNA sequencing. In this literature review, the various DNA sequencing technologies shall be looked into, as well as their applications in the real world and the tools that have aided DNA sequencing, e.g. PCR. This review shall begin with a discussion of the chain termination method by Sanger.

The Chain Termination Method

Sanger discovered that the inhibitory activity of 2′,3′-dideoxythymidine triphosphate (ddTTP) on DNA polymerase I depended on its incorporation into the growing oligonucleotide chain in the place of thymidylic acid (dT) [2]. In the structure of ddT there is no 3′-hydroxyl group; a hydrogen atom is in its place. With the hydrogen in place of the hydroxyl group, the chain cannot be extended any further, so termination occurs at the position where dT would be incorporated. Figure 1 shows the structures of dNTP and ddNTP.
To remove the 3′-hydroxyl group and replace it with a hydrogen, the triphosphate has to undergo a chemical procedure [1]; a different procedure is employed for each of the triphosphates. ddATP was produced from the starting material 3′-O-tosyl-2′-deoxyadenosine, which was treated with sodium methoxide in dimethylformamide to produce the unsaturated compound 2′,3′-dideoxy-2′,3′-didehydroadenosine [4]. The double bond between carbons 2′ and 3′ of the cyclic ether was then hydrogenated with a palladium-on-carbon catalyst to give 2′,3′-dideoxyadenosine (ddA). The ddA was then phosphorylated to add the triphosphate group. Purification then took place on a DEAE-Sephadex column using a gradient of triethylamine carbonate at pH 8.4. Figure 2 is a schematic representation of the route to ddA prior to phosphorylation. In the preparation of ddTTP (Figure 3), thymidine was tritylated (+CPh3) at the 5′-position and a methanesulphonyl (+CH3SO2) group was introduced at the 3′-OH group [5]. The methanesulphonyl group was substituted with iodine by refluxing the compound in 1,2-dimethoxyethane in the presence of NaI. After chromatography on a silica column, the 5′-trityl-3′-iodothymidine was treated with 80% acetic acid to remove the trityl group. The resultant 3′-iodothymidine was hydrogenated to produce 2′,3′-dideoxythymidine, which was subsequently phosphorylated.
Once phosphorylated, the ddTTP was purified on a DEAE-Sephadex column with a triethylammonium hydrogen carbonate gradient. Figure 3 is a schematic representation of the route to ddT prior to phosphorylation. When preparing ddGTP, the starting material was N-isobutyryl-5′-O-monomethoxytrityldeoxyguanosine [1]. After tosylation of the 3′-OH group, the compound was converted to the 2′,3′-didehydro derivative with sodium methoxide. The isobutyryl group was partly removed during this treatment with sodium methoxide and was removed completely by incubation in the presence of NH3 overnight at 45 °C. During the overnight incubation period, the didehydro derivative was reduced to the dideoxy derivative and then converted to the triphosphate. The triphosphate was purified by fractionation on a DEAE-Sephadex column using a triethylamine carbonate gradient. Figure 4 is a schematic representation of the route to ddG prior to phosphorylation. Preparation of ddCTP was similar to that of ddGTP, but started from N-anisoyl-5′-O-monomethoxytrityldeoxycytidine. The purification process was omitted for ddCTP, as it was produced in very low yield; the solution was therefore used directly in the experiment described in the paper [2]. Figure 5 is a schematic representation of the route to ddC prior to phosphorylation. With the four dideoxy samples prepared, the sequencing procedure can commence. The dideoxy samples are placed in separate tubes, along with restriction fragments obtained from the φX174 replicative form and the four dNTPs [2]. Strand synthesis proceeds using the dNTPs until a ddNTP is incorporated into the growing polynucleotide, terminating further synthesis. This is due to the lack of the hydroxyl group at the 3′ position of the ddNTP, which prevents the next nucleotide from attaching to the strand. The four reaction mixtures are separated by gel-electrophoresis on acrylamide gels (see Gel-Electrophoresis). Figure 6 shows the sequencing procedure. Reading the sequence is straightforward [1].
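The termination logic above, and the reading of the resulting gel, can be sketched as follows. This is an illustrative model, not the wet-lab protocol: each ddNTP lane records the lengths of products terminated at that base, and the sequence is recovered by ordering all bands by length (the shortest fragment having migrated furthest).

```python
def sanger_lanes(template):
    """Fragment lengths per ddNTP lane for a template strand read 3'->5';
    the product grows 5'->3' as the complement of the template."""
    pair = {"A": "T", "T": "A", "G": "C", "C": "G"}
    product = "".join(pair[b] for b in template)   # newly synthesised strand
    lanes = {b: [] for b in "ATGC"}
    for length, base in enumerate(product, start=1):
        lanes[base].append(length)   # a ddNTP of this base terminates here
    return lanes

def read_gel(lanes):
    """Order all bands by fragment length to read the sequence 5'->3'."""
    bands = sorted((length, base) for base, lengths in lanes.items()
                   for length in lengths)
    return "".join(base for _, base in bands)

lanes = sanger_lanes("TACGGT")
print(lanes["A"], lanes["C"])   # -> [1, 6] [4, 5]
print(read_gel(lanes))          # -> ATGCCA
```

Every position of the product appears in exactly one lane, which is why the four lanes together tile the whole ladder of fragment lengths.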
The band that has moved the furthest is located first; this represents the smallest piece of DNA, the strand terminated by incorporation of the dideoxynucleotide at the first position in the template. The track in which this band occurs is noted. For example (as shown in Figure 6), if the band that moved the furthest is in track A, the first nucleotide in the sequence is A. To find the next nucleotide, locate the next most mobile band, corresponding to a DNA molecule one nucleotide longer than the first; in this example, that band is in track T. Therefore the second nucleotide is T, and the overall sequence so far is AT. The process is carried on along the autoradiograph until the individual bands start to close in and become inseparable, and therefore hard to read. In general it is possible to read up to 400 nucleotides from one autoradiograph with this method. Figure 7 is a schematic representation of an autoradiograph. Ever since Sanger perfected his method of DNA sequencing, there have been advances in sequencing methods along with notable achievements; certain achievements, such as the Human Genome Project, shall be discussed later in this review.

Gel-Electrophoresis

Gel-electrophoresis is defined as the movement of charged molecules in an electric field [1][8]. DNA molecules, like many other biological compounds, carry an electric charge; in the case of DNA, this charge is negative. Therefore, when DNA molecules are placed in an electric field, they migrate towards the positive pole (as shown in Figure 8). There are three factors which affect the rate of migration: shape, electrical charge and size. The polyacrylamide gel comprises a complex network of pores through which the molecules must travel to reach the anode.

Maxam and Gilbert Method

The Maxam and Gilbert method was proposed in the same year as Sanger's method. Sanger's method involves enzymatic synthesis of radiolabelled fragments from unlabelled DNA strands [2].
The Maxam-Gilbert method, in contrast, involves chemical cleavage of pre-labelled DNA strands in four different ways to form four different collections of labelled fragments [6][7]. Both methods use gel-electrophoresis to separate the target DNA molecules [8]. However, Sanger's chain termination method has proven simpler and easier to use than the Maxam and Gilbert method [9]; indeed, most literature textbooks explain Sanger's method of DNA sequencing rather than Maxam and Gilbert's [1][3][9][10]. In Maxam and Gilbert's method there are two chemical cleavage reactions [6][7]. One takes place at guanine and adenine, the two purines, and the other cleaves the DNA at cytosine and thymine, the pyrimidines. A specific reagent is used for each cleavage reaction: the purine-specific reagent is dimethyl sulphate and the pyrimidine-specific reagent is hydrazine. Each of these reactions is carried out in a different way, as each of the four bases has different chemical properties. The cleavage reaction for guanine/adenine uses dimethyl sulphate to add a methyl group to guanines at the N7 position and to adenines at the N3 position [7]. The glycosidic bond of a methylated adenine is unstable and breaks easily on heating at neutral pH, leaving the sugar free. Treatment with 0.1 M alkali at 90 °C then cleaves the sugar from the neighbouring phosphate groups. When the resulting end-labelled fragments are resolved on a polyacrylamide gel, the autoradiograph contains a pattern of dark and light bands. The dark bands arise from breakage at guanines, which methylate at a rate five-fold faster than adenines. In this reaction the guanine bands therefore appear stronger than the adenine bands, which can lead to misinterpretation, so an adenine-enhanced cleavage reaction is also carried out.
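The four-lane logic of the method can be sketched as a toy model, in which each reaction marks its cleavage positions and a base is then called from which lanes show a band at that position. The lane set below (G, G+A, C+T, C) reflects the reactions described above; the calling rules are a simplification for illustration.

```python
# Each reaction cleaves the 5'-labelled strand at a subset of bases.
REACTIONS = {"G": {"G"}, "G+A": {"G", "A"}, "C+T": {"C", "T"}, "C": {"C"}}

def maxam_gilbert_lanes(strand):
    """Positions (1-based fragment lengths) at which each lane shows a band."""
    lanes = {name: [] for name in REACTIONS}
    for pos, base in enumerate(strand, start=1):
        for name, bases in REACTIONS.items():
            if base in bases:
                lanes[name].append(pos)
    return lanes

def call_base(pos, lanes):
    """Identify the base at pos from the combination of lanes with a band."""
    hit = {name for name, positions in lanes.items() if pos in positions}
    if "G" in hit:
        return "G"        # band in the G lane -> guanine
    if "G+A" in hit:
        return "A"        # purine lane only -> adenine
    if "C" in hit:
        return "C"        # band in the C lane -> cytosine
    if "C+T" in hit:
        return "T"        # pyrimidine lane only -> thymine

lanes = maxam_gilbert_lanes("GGATCC")
print("".join(call_base(p, lanes) for p in range(1, 7)))  # -> GGATCC
```

The redundancy between lanes (G inside G+A, C inside C+T) is what lets each base be called unambiguously, mirroring how the real autoradiograph is read.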
Figure 9 shows the structural changes of guanine when undergoing the structural modifications involved in Maxam-Gilbert sequencing.
The reaction products are combined and co-electrophoresed, and the DNA fragments generated in each reaction are detected near the bottom of the gel and identified by their colour. As for choosing which DNA sequencing method to be used, Sangers Method was chosen. This is because Sangers method has been proven to be the most durable and efficient method of DNA sequencing and was the choice of most investigators in large scale sequencing [12]. Figure 10 shows a typical sequence is ge nerated using an automated sequencer. The selection of the dyes was the central development of automated DNA sequencing [11]. The fluorophores that were selected, had to meet several criteria. For instance the absorption and emission maxima had to be in the visible region of the spectrum [11] which is between 380 nm and 780 nm [10], each dye had to be easily distinguishable from one another [11]. Also the dyes should not impair the hybridisation of the oligonucleotide primer, as this would decrease the reliability of synthesis in the sequencing reactions. Figure 11 shows the structures of the dyes which are used in a typical automated sequencing procedure, where X is the moiety where the dye will be bound to. Table 1 shows which dye is covalently attached to which nucleotide in a typical automated DNA sequencing procedure Dye Nucleotide Attached Flourescein Adenosine NBD Thymine Tetramethylrhodamine Guanine Texas Red Cytosine In designing the instrumentation of the florescence detection apparatus, the primary consideration was sensitivity. As the concentration of each band on the co-electrophoresis gel is around 10 M, the instrument needs to be capable of detecting dye concentration of that order. This level of detection can readily be achieved by commercial spectrofluorimeter systems. Unfortunately detection from a gel leads to a much higher background scatter which in turn leads to a decrease in sensitivity. 
This is solved by using a laser excitation source in order to obtain maximum sensitivity [11]. Figure 12 is schematic diagram of the instrument with the explanation of the instrumentation employed. When analyzing data, Hood had found some complications [11]. Firstly the emission spectra of the different dyes overlapped, in order to overcome this, multicomponent analysis was employed to determine the different amounts of the four dyes present in the gel at any given time. Secondly, the different dye molecules impart non-identical electrophoretic mobilities to the DNA fragments. This meant that the oligonucleotides were not equal base lengths. The third major complication was in analyzing the data comes from the imperfections of the enzymatic methods, for instance there are often regions of the autoradiograph that are difficult to sequence. These complications were overcome in five steps [11] High frequency noise is removed by using a low-pass Fourier filter. A time delay (1.5-4.5 s) between measurements at different wavelength is partially corrected for by linear interpolation between successive measurements. A multicomponent analysis is performed on each set of four data points; this computation yields the amount of each of the four dyes present in the detector as a function of time. The peaks present in the data are located The mobility shift introduced by the dyes is corrected for using empirical determined correction factors. Since the publication of Hoods proposal of the fluorescence detection in automated DNA sequence analysis. Research has been made on focussed on developing which are better in terms of sensitivity [12]. Bacterial and Viral Genome Sequencing (Shotgun Sequencing) Prior to 1995, many viral genomes have been sequenced using Sangers chain termination technique [13], but no bacterial genome has been sequenced. 
The viral genomes that had been sequenced include the 229 kb genome of cytomegalovirus [14] and the 192 kb genome of vaccinia [15]; the 187 kb mitochondrial and 121 kb chloroplast genomes of Marchantia polymorpha had also been sequenced [16]. Viral genome sequencing had been based on the sequencing of clones, usually derived from extensively mapped restriction fragments, or from λ or cosmid clones [17]. Despite advances in DNA sequencing technology, genome sequencing had not progressed beyond clones on the order of ~250 kb, owing to the lack of computational approaches that would enable the efficient assembly of a large number of fragments into an ordered single assembly [13][17]. To address this, Venter and Smith proposed shotgun sequencing in 1995, enabling Haemophilus influenzae (H. influenzae) to become the first bacterial genome to be sequenced [13][17]. H. influenzae was chosen because its base composition is similar to that of the human genome, with 38% of the sequence made up of G + C. Table 2 shows the procedure of shotgun sequencing [17]. When constructing the library, ultrasonic waves were used to randomly fragment the genomic DNA into fairly small pieces, about the size of a gene [13]. The fragments were purified and then attached to plasmid vectors [13][17]. The plasmid vectors were then inserted into an E. coli host cell to produce a library of plasmid clones. The E. coli host cell strains lacked restriction enzymes, which prevented deletions, rearrangements and loss of the clones [17]. The fragments were randomly sequenced using automated sequencers (dye-labelled terminators), with T7 and SP6 primers used to sequence the ends of the inserts to enable coverage of the genome by a factor of 6 [17].
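Dimensioning the 6x coverage is a short calculation. The sketch below assumes the published ~1.83 Mb H. influenzae genome; the 460 bp average read length is an illustrative assumption, and the missed-fraction estimate follows the standard Lander-Waterman model rather than anything stated in the text.

```python
import math

def reads_for_coverage(genome_len, read_len, coverage):
    """Number of random reads needed for a given average (x-fold) coverage:
    coverage = reads * read_len / genome_len, solved for reads."""
    return math.ceil(coverage * genome_len / read_len)

def fraction_unsequenced(coverage):
    """Lander-Waterman estimate: a random base is missed with probability
    e^(-c), where c is the average coverage."""
    return math.exp(-coverage)
```

At 6x coverage, e^(-6) is roughly 0.25%, which is why a gap-closure stage still follows assembly.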
Table 2 (Reference 17)
- Random small-insert and large-insert library construction: shear genomic DNA randomly to ~2 kb and 15 to 20 kb respectively.
- Library plating: verify the random nature of the library and maximize random selection of small-insert and large-insert clones for template production.
- High-throughput DNA sequencing: sequence a sufficient number of fragments from both ends for 6x coverage.
- Assembly: assemble random sequence fragments and identify repeat regions.
- Gap closure (physical gaps): order all contigs (fingerprints, peptide links, λ clones, PCR) and provide templates for closure.
- Gap closure (sequence gaps): complete the genome sequence by primer walking.
- Editing: inspect the sequence visually and resolve sequence ambiguities, including frameshifts.
- Annotation: identify and describe all predicted coding regions (putative identifications, starts and stops, role assignments, operons, regulatory regions).

Once the sequencing reactions were complete, the fragments needed to be assembled; this is done using the TIGR Assembler software (The Institute for Genomic Research) [17]. The TIGR Assembler simultaneously clusters and assembles fragments of the genome. To obtain the speed necessary to assemble more than 10^4 fragments [17], an algorithm builds a table of all 10-bp oligonucleotide subsequences to generate a list of potential sequence fragment overlaps. The algorithm begins with an initial contig (a single fragment); to extend the contig, a candidate fragment is chosen based on overlapping oligonucleotide content. The initial contig and candidate fragment are aligned by a modified version of the Smith-Waterman algorithm [18], which allows optimal gapped alignments. The contig is extended by the fragment only if strict criteria of overlap content are met. The algorithm automatically lowers these criteria in regions of minimal coverage and raises them in regions with a possible repetitive element [17].
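A toy version of this strategy can be sketched as follows. It keeps the two essential pieces — a k-mer table for proposing candidate overlaps, and greedy contig extension — but substitutes exact suffix/prefix matching for the modified Smith-Waterman alignment and uses a small k instead of the 10-bp table, so it is an illustration of the logic rather than the real assembler.

```python
from collections import defaultdict

def kmer_index(fragments, k):
    """Table of every k-bp subsequence -> indices of fragments containing it
    (the TIGR Assembler builds this table with 10-bp oligonucleotides)."""
    index = defaultdict(set)
    for i, frag in enumerate(fragments):
        for j in range(len(frag) - k + 1):
            index[frag[j:j + k]].add(i)
    return index

def overlap(a, b, min_len):
    """Longest exact suffix(a)/prefix(b) match; the real assembler scores a
    modified Smith-Waterman alignment here instead."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(fragments, k, min_overlap):
    """Start from a single fragment (the initial contig) and repeatedly
    extend it with the candidate fragment that overlaps its end the most."""
    frags = list(fragments)
    contig = frags.pop(0)
    extended = True
    while extended and frags:
        extended = False
        index = kmer_index(frags, k)
        tail = contig[-(min_overlap + k):]      # only the contig's end matters
        candidates = set()
        for j in range(len(tail) - k + 1):
            candidates |= index.get(tail[j:j + k], set())
        scored = [(overlap(contig, frags[i], min_overlap), i) for i in candidates]
        scored = [(n, i) for n, i in scored if n >= min_overlap]
        if scored:
            n, i = max(scored)
            contig += frags.pop(i)[n:]
            extended = True
    return contig
```

Feeding it three overlapping fragments of a known sequence reconstructs that sequence; the real assembler additionally tightens or relaxes the overlap criteria depending on coverage and suspected repeats.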
The TIGR Assembler is designed to take advantage of large clone sizes [17]. It also enforces the constraint that sequences from the two ends of the same template must point toward one another in the contig and be located within a certain range of base pairs [17]. The TIGR Assembler therefore provides the computational power to assemble the fragments. Once the fragments have been aligned, the TIGR Editor is used to proofread the sequence and check for any ambiguities in the data [17]. The technique does require precautionary care; for instance, the small-insert library should be constructed and end-sequenced concurrently [17]. It is essential that the sequence fragments are of the highest quality, and they should be rigorously checked for any contamination [17].

Pyrosequencing

Most DNA sequencing methods require gel electrophoresis; however, in 1996 at the Royal Institute of Technology, Stockholm, Ronaghi proposed pyrosequencing [19][20]. This is an example of sequencing-by-synthesis, in which DNA molecules are clonally amplified on a template, and the template then undergoes sequencing [25]. The approach relies on detecting DNA polymerase activity through the inorganic pyrophosphate (PPi) released during DNA synthesis, which is measured by an enzymatic luminometric detection assay, and it offers the advantage of real-time detection [19]. Ronaghi used Nyren's [21] description of an enzymatic system consisting of DNA polymerase, ATP sulphurylase and luciferase to couple the release of PPi, obtained when a nucleotide is incorporated by the polymerase, with light emission that can be easily detected by a luminometer or photodiode [20]. When PPi is released, it is immediately converted to adenosine triphosphate (ATP) by ATP sulphurylase, and the level of generated ATP is sensed by the photon-producing luciferase [19][20][21]. Unused ATP and deoxynucleotides are degraded by the enzyme apyrase.
The presence or absence of PPi, and therefore the incorporation or non-incorporation of each nucleotide added, is ultimately assessed on the basis of whether or not photons are detected. There is minimal time lapse between these events, and the conditions of the reaction are such that iterative addition of nucleotides and PPi detection are possible. The PPi released upon nucleotide incorporation is detected by ELIDA (Enzymatic Luminometric Inorganic pyrophosphate Detection Assay) [19][21]. Within the ELIDA, the PPi is converted to ATP with the help of ATP sulphurylase, and the ATP reacts with luciferin to generate light, more than 6 x 10^9 photons at a wavelength of 560 nm, which can be detected by a photodiode, photomultiplier tube, or charge-coupled device (CCD) camera [19][20]. As mentioned before, the DNA molecules need to be amplified by the polymerase chain reaction (PCR), which is discussed later. Ronaghi observed that dATP interfered with the detection system [19]. This interference is a major problem when the method is used to detect a single-base incorporation event. The problem was rectified by replacing dATP with dATPαS (deoxyadenosine α-thiotriphosphate). It was noticed that adding a small amount of dATP (0.1 nmol) induces an instantaneous increase in the light emission followed by a slow decrease until it reaches a steady-state level (as Figure 11 shows). This makes it impossible to start a sequencing reaction by adding dATP; the reaction must instead be started by the addition of DNA polymerase. The signal-to-noise ratio was also higher for dATP than for the other nucleotides. On the other hand, the addition of 8 nmol of dATPαS (an 80-fold higher amount than the dATP) had only a minor effect on luciferase (as Figure 14 shows), because dATPαS is less than 0.05% as effective as dATP as a substrate for luciferase [19].
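The iterative dispense-and-detect cycle can be sketched as a small simulation. The dispensation order, and the assumption that the light signal is simply proportional to the number of bases incorporated in a run, are simplifications for illustration; in the real assay the signal is the ELIDA light output.

```python
def pyrosequence(template, dispensation_order="TACG", cycles=8):
    """Simulate pyrosequencing: each dispensed nucleotide complementary to
    the next template base(s) is incorporated, releasing PPi; ELIDA converts
    PPi -> ATP -> light, so the recorded signal is proportional to the run
    length incorporated. Unused nucleotides are degraded by apyrase before
    the next dispensation."""
    comp = {"A": "T", "C": "G", "G": "C", "T": "A"}
    pos = 0
    pyrogram = []                    # (dispensed nucleotide, signal) pairs
    for _ in range(cycles):
        for nt in dispensation_order:
            run = 0
            while pos < len(template) and comp[template[pos]] == nt:
                run += 1             # homopolymer runs give a taller peak
                pos += 1
            pyrogram.append((nt, run))
    return pyrogram
```

The returned list is exactly what a pyrogram plots: one bar per dispensation, with height equal to the number of bases incorporated.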
Pyrosequencing has been adapted by 454 Life Sciences for sequencing by synthesis [22] and is known as the Genome Sequencer (GS) FLX [23][24]. The 454 system starts from random single-stranded DNA (ssDNA) fragments, and each random fragment is bound to a bead under conditions that allow only one fragment per bead [22]. Once a fragment is attached to a bead, clonal amplification occurs via emulsion PCR. The emulsified beads are purified, placed in microfabricated picolitre wells and then subjected to pyrosequencing. A lens array in the detection part of the instrument focuses the luminescence from each well onto the chip of a CCD camera. The CCD camera images the plate every second in order to follow the progression of the pyrosequencing [20][22]. The machine generates raw data in real time, in the form of the bioluminescence produced by the reactions, and the data are presented on a pyrogram [20].

Sequencing by Hybridisation

The methods discussed so far — chain termination, Maxam and Gilbert, and pyrosequencing — are all direct methods of sequencing DNA, in which each base position is determined individually [26]. There are also indirect methods, in which the DNA sequence is assembled based on experimental determination of the oligonucleotide content of the chain. One promising indirect method is sequencing by hybridisation, in which sets of oligonucleotide probes are hybridised under conditions that allow the detection of complementary sequences in the target nucleic acid [26]. Sequencing by Hybridisation (SBH) was proposed by Drmanac et al. in 1987 [27] and is based on Doty's observation that when DNA is heated in solution, the double strand melts to form single-stranded chains, which then re-nature spontaneously when the solution is cooled [28]. This raises the possibility of one piece of DNA recognising another.
This led to Drmanac's proposal that oligonucleotide probes hybridised under such conditions would allow complementary sequences in the DNA target to be detected [26][27]. In SBH, an oligonucleotide probe (an n-mer probe, where n is the length of the probe) is a substring of a DNA sample; the process is similar to doing a keyword search in a page full of text [29]. The set of positively expressed probes is known as the spectrum of the DNA sample. For example, if the single-stranded DNA 5'-GGTCTCG-3' is probed with 4-mer probes, four probes will hybridise onto the sequence successfully. The remaining probes will form hybrids with a mismatch at the end base and will be denatured during selective washing. The probes that match at the end base form fully matched hybrids, which are retained and detected. Each positively expressed probe serves as a platform to decipher the next base, as is seen in Figure 16. The probes that have successfully hybridised onto the sequence then need to be detected. This is achieved by labelling the probes with dyes such as Cyanine3 (Cy3) and Cyanine5 (Cy5), so that the degree of hybridisation can be detected by imaging devices [29]. SBH methods are ideally suited to microarray technology due to their inherent potential for parallel sample processing [29]. An important advantage of using a DNA array rather than a multiple-probe array is that all the resulting probe-DNA hybrids in any single probe hybridisation are of identical sequence [29]. One of the main types of DNA hybridisation array format is the oligonucleotide array, which is currently patented by Affymetrix [30]; its commercial uses shall be discussed under applications of the DNA array (Affymetrix). Due to the small size of the hybridisation array and the small amount of target present, it is a challenge to acquire the signals from a DNA array [29]. These signals must first be amplified before they can be detected by the imaging devices.
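The spectrum bookkeeping and the base-by-base deciphering described above can be sketched as follows; this is a minimal model that assumes an unambiguous, error-free spectrum, whereas real SBH must cope with mismatches, repeats and hybridisation noise.

```python
def spectrum(dna, n):
    """Spectrum of a DNA sample: the set of n-mer probes that hybridise,
    i.e. that occur as substrings of the target."""
    return {dna[i:i + n] for i in range(len(dna) - n + 1)}

def reconstruct(spec, start, n):
    """Greedy reconstruction: each positively expressed probe serves as a
    platform to decipher the next base. Stops when no unique extension
    exists (ambiguous or exhausted spectrum)."""
    remaining = set(spec)
    seq = start
    remaining.discard(start)
    while True:
        suffix = seq[-(n - 1):]                    # overlap with next probe
        nxt = [p for p in remaining if p.startswith(suffix)]
        if len(nxt) != 1:
            return seq
        seq += nxt[0][-1]                          # append the deciphered base
        remaining.discard(nxt[0])
```

For the 5'-GGTCTCG-3' example, the 4-mer spectrum is {GGTC, GTCT, TCTC, CTCG}, and chaining the probes recovers the original sequence.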
Signals can be boosted by two means, namely target amplification and signal amplification. In target amplification, such as PCR, the amount of target is increased to enhance signal strength; in signal amplification, the amount of signal per unit of target is increased.

Nanopore Sequencing

Nanopore sequencing was proposed in 1996 by Branton et al., who showed that individual polynucleotide molecules can be characterised using a membrane channel [31]. Nanopore sequencing is an example of single-molecule sequencing, in which the concept of sequencing-by-synthesis is followed but without the prior amplification step [24]. It is achieved by measuring the ionic conductance of a nucleotide passing through a single ion channel in a biological membrane or planar lipid bilayer. The measurement of ionic conductance is routine in neurobiology and biophysics [31], as well as in pharmacology (Ca2+ and K+ channels) [32] and biochemistry [9]. Most channels undergo voltage-dependent or ligand-dependent gating, but there are several large ion channels (e.g. Staphylococcus aureus α-hemolysin) which can remain open for extended periods, thereby allowing a continuous ionic current to flow across a lipid bilayer [31]. A transmembrane voltage applied across an open channel of appropriate size should draw DNA molecules through the channel as extended linear chains, whose presence would measurably reduce the ionic flow. It was assumed that the reduction in ionic flow seen in single-channel recordings would characterise the length, and in turn other characteristics, of the polynucleotide. In Branton's proposal, α-hemolysin was used to form a single channel across a lipid bilayer separating two buffer-filled compartments [31]. α-Hemolysin is a monomeric, 33 kDa, 293-residue protein that is secreted by the human pathogen Staphylococcus aureus [33].
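The idea that a translocating polynucleotide transiently reduces the ionic current can be sketched as a simple event detector over a current trace. The 50% threshold and the synthetic trace values below are illustrative assumptions; real recordings require filtering and calibrated blockade levels.

```python
def detect_blockades(trace, open_current, threshold=0.5):
    """Find translocation events in an ionic-current trace: samples below
    threshold * open_current count as blocked; each contiguous blocked run
    is one event, whose duration reflects how long the polynucleotide
    occupies the pore (and hence its length)."""
    events = []          # (start index, duration in samples) pairs
    start = None
    for i, current in enumerate(trace):
        blocked = current < threshold * open_current
        if blocked and start is None:
            start = i                          # event begins
        elif not blocked and start is not None:
            events.append((start, i - start))  # event ends
            start = None
    if start is not None:                      # trace ends while blocked
        events.append((start, len(trace) - start))
    return events
```

Applied to a trace with an open-channel current of 100 (arbitrary units) and two dips, the detector reports one event per dip, with the longer dip corresponding to a longer chain.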
Nanopores are produced when α-hemolysin subunits are introduced into a buffered solution that is separated by a lipid bilayer into two compartments (known as cis and trans): the head of t