Monday, September 30, 2019

More Than a Veil

More Than a Veil: A Feminist Reading of Marjane Satrapi's Persepolis

Cultural differences have been at the forefront of the ongoing struggle between the United States and Iran since the 1970s. Stereotypes are built on misunderstandings, which can prove costly in international relationships. Our national media coverage of Iran portrays radical Islamic men oppressing their female counterparts. Many American citizens hold narrow opinions of Iranian women, most of them dealing with the infamous veil that Islamic females wear. Marjane Satrapi, in her biographical novel Persepolis, examines Iranian women's roles in the Islamic Revolution, breaks the myth of the oppressing veil, and demonstrates how Iranian boys and girls are socially constructed. Satrapi does all of this with a nontraditional writing style, as she challenges the more common coming-of-manhood tale called a Bildungsroman (Barry p. 129) with her own coming-of-womanhood narrative.

In America it is widely believed that women in Iran are to be seen and not heard, and that Iran is controlled by an extreme patriarchy in which women voice no opinions on social issues. However, we see in Persepolis that Marjane comes from a family with strong women like her mother and grandmother. Her mother routinely takes part in protests alongside her husband in the streets of Tehran (Satrapi p. 18). Marjane's mother is a living counterexample to the misconception that women in Iran are mere subjects. She illustrates how women all across Iran were active during the Islamic Revolution, as protestors, collaborators, or victims (Botshon p. 5). Agency is shown not just in adult women in Persepolis but also in adolescent girls.

Many Americans are quick to point to the veil which covers an Islamic woman's face as a sign of the extreme patriarchy in Iran. However, at the beginning of Persepolis we see Marjane as a child, along with other little girls, taking their veils off at school to use them for games like jump rope (Satrapi p. 3). This imagery immediately shatters our connotations of disciplined Iranian girls and focuses us instead on the playful resistance which the schoolgirls demonstrate. Marjane's rebellious nature does not stop in childhood, despite the oppressive agenda of the school board. Her self-expression continues as a teenager when she adopts American cultural ideas like punk rock clothing and even owns Kim Wilde and Iron Maiden posters, which her parents smuggled in from Turkey (Satrapi pp. 127-129). In all of these scenes Marjane is drawn on the pages of the novel without her veil on. These scenes are an example of how some girls were not submissive to Islamic rule, contrary to how it is often depicted in our own media.

Even though women had proactive roles in the Islamic Revolution, they were still constructed and treated differently in Iranian culture. Marjane's mother speaks of the violent soldiers she encountered in the streets of Tehran one day when she was caught not wearing the mandatory veil: "They insulted me. They said that women like me should be pushed up against a wall and fucked. And then thrown in the garbage... And that if I didn't want that to happen, I should wear the veil." (Satrapi p. 74) In this scene it is clear that the Islamic regime's agenda is to suppress Iranian women's individuality, but why are these military men so violent? The answer may lie in the way that girls and boys were socially constructed during the Islamic Revolution.
In Iranian culture it is common for boys to learn military values at school while girls learn more "suitable" household skills like knitting and sewing so that they can make winter hoods for the soldiers. At a young age boys are taught to be soldiers and take part in war while girls help the war effort indirectly. Aggression in boys may seem natural to some people; in Iran, however, young boys are taught this social trait. The veil itself is a way that Islamic fundamentalists try to construct women as oppressed and submissive. The wearing of the veil is enforced by school officials who have an Islamic agenda; however, many girls are taught contradictory ideas about the veil by their parents at home. Marjane would have been more susceptible to Islamic fundamentalists if she had not come from a family with strong, independent female figures. Satrapi demonstrates clearly that gender roles are taught in institutions like religion and school and are not natural. Even more importantly, Satrapi writes about how she rebelled against these norms, which makes Persepolis an original narrative of growing up as a girl in Iran.

Persepolis at its roots is a personal female memoir of Marjane Satrapi's growth into womanhood while being raised in Iran during the Islamic Revolution. The story of Marjane Satrapi's life cannot be duplicated by another author. Marjane grew up in a confusing time where complex issues of religion, politics, and class formed an authentic female version of a classic Bildungsroman tale. Satrapi's Persepolis questions western thought about Iranian women. Without Marjane Satrapi's personal experience, it is easy to believe that a similar Islamic Revolution tale told by a female protagonist would focus on the hardships of being oppressed and not on the variety of social classes that depict rebellious Iranian women. Without Marjane Satrapi, Persepolis could have been an unoriginal, stereotypically western story about Iranian women. Marjane Satrapi literally makes herself the central character as the author.

Persepolis as a feminist work shows the value of women in Iranian society, the social construction of girls and boys, and the complex issues in Marjane's life which are reflected in her work. Many misconceptions about Iranian women are dismissed in Persepolis. Satrapi shows Iranian women as agents with a cause rather than subjects with no voice. Although we are used to the image of the typical submissive Iranian woman waiting for liberation, Satrapi explodes this belief for the western reader. Marjane Satrapi's Persepolis humanizes the Iranian female population, which is all too often illustrated in United States media as being oppressed by a veil.

Works Cited
Barry, Peter. Beginning Theory: An Introduction to Literary and Cultural Theory. 3rd ed. Manchester: Manchester University Press, 2009. Print.
Botshon, Lisa, and Melinda Plastas. "Homeland In/Security: A Discussion and Workshop on Teaching Marjane Satrapi's Persepolis." Feminist Teacher, vol. 20, no. 1, University of Illinois Press, 2009, pp. 1-14.
Elahi, Babak. "Frames and Mirrors in Marjane Satrapi's Persepolis." University of Nebraska Press, vol. 15, no. 1-2, 2007, pp. 312-325.
Satrapi, Marjane. The Complete Persepolis. New York: Pantheon Books, 2007. Print.

Sunday, September 29, 2019

Yahoo Reaction Paper

On the 3rd of May, 2012, Daniel Loeb, Yahoo's largest external shareholder, who then controlled 5.8% of the company through his hedge fund, Third Point, launched an attack on Yahoo! and its new C.E.O., alleging that Scott Thompson had lied on his resume about his academic qualifications. This was the result of a proxy war between Yahoo! and Loeb, who, being a major stakeholder, wanted his choice of candidates on the board and saw Thompson as an obstacle. These allegations snowballed into a huge crisis during a trying period for the organization. Coming after the rejection of a profitable takeover bid from Microsoft, steep competition from other internet giants, and top-level management issues, this situation weakened the company further.

Thompson, in his resume, claimed to have a college degree in accounting and computer science from Stonehill College near Boston. This "claim" was published in the company's bio and annual report, a legal document whose validity and authenticity is confirmed by the C.E.O. He even certified these degrees in the Securities and Exchange Commission filings. After receiving Loeb's letter stating that Thompson only had an accounting degree from Stonehill and that the college didn't even offer a computer science degree at the time, Yahoo! initiated an investigation. Upon receiving its findings, Yahoo! concluded that Thompson in fact only had an accounting degree and called the mistake an "inadvertent error."

Fudging information on one's resume is something that many people indulge in to make their profiles appealing, especially in competitive job markets, where they fear losing out to other capable candidates. Scott Thompson probably didn't need to lie about this particular qualification, as he was in fact more than capable of leading Yahoo! given his past experience in technology firms like PayPal and Visa.

In my opinion, one of the most important methods of moral reasoning that one must adopt while making any professional or even personal decision is the Rawlsian liberalism moral method. As Minnie Moldavia rightly suggests, one should keep in mind that the decisions you make could eventually decide your social position in the future. This future could have a positive or negative effect not just on you, but also on the others who depend on and matter to you. Had Scott Thompson followed this method while writing his resume, as opposed to a purely consequentialist approach, it is very likely he wouldn't have found himself in such a controversy. This saga did not just affect him, but also the organization, its shareholders and employees.

Since Loeb first revealed Thompson's padded resume, Yahoo's shares fell by around 3%. Since his tenure began, Thompson had cut costs by laying off almost 14% of the Yahoo! workforce, most of whom were in fact engineers and computer science graduates. Although "Resume-Gate" seemed to some a minor error blown out of proportion, several disgruntled Silicon Valley employees questioned how they could work for an organization where the C.E.O. claimed to be a computer scientist and actually wasn't. Employee and shareholder morale was at an all-time low, a situation caused by a decision made many years ago, which Thompson probably thought would never come back to haunt him.

Thompson is not alone. There have been other C.E.O.s who have lied about their credentials in the past, and some have almost gotten away with it. Ronald Zarrella, C.E.O. of Bausch & Lomb, admitted to his mistakes and retained his position.
Others, like David Edmondson of RadioShack, haven't been so lucky. In the name of marketing or branding themselves, people believe they can attract aspirational jobs and seem appealing to employers. More often than not, people do not need these little lies to achieve success or the job of their dreams. David Edmondson, for example, had climbed the company ladder and become C.E.O. because of his ability and skill set, not because of the degree he showcased on his resume. RadioShack may have been too harsh when it implemented its decision, but it was definitely for the long-term stability of the company.

From the observations so far, I understand that the active agents are the board of directors at Yahoo!, Scott Thompson and Daniel Loeb. Their decisions will affect the passive agents, i.e., the shareholders and the employees. So were Yahoo! and RadioShack justified in asking their prized possessions to move on? As a decision maker, the questions one must ask, according to Graham Tucker, are as follows.

Is the decision profitable? On firing Scott Thompson without cause, Yahoo! would have to pay him a huge severance fee and stock grants worth millions of dollars. This would seem a huge compensation and a loss for the organization in the short term, but it could definitely prove profitable in the near future, as the stock price was bound to increase, which it did upon Thompson's resignation.

Tucker also asks if the decision is legal. The answer to that is also yes. Under the Sarbanes-Oxley Act of 2002, violators face penalties of up to 20 years in prison and fines of up to $5 million if the data submitted to the SEC isn't authentic.

Fairness of the decision is another question that Tucker asks. According to Yahoo's code of ethics, all employees are expected to disclose fair, accurate, timely and understandable information in reports and documents filed with the S.E.C. This applies even to directors. It would be unfair to other employees if such conduct by top-level management were ignored. Firing Thompson was also the right decision, as it would not just set a strict precedent at Yahoo!, but would also salvage the company from a trying situation. No company would want its leader lying about anything, let alone something as petty as three words on a resume. Trust issues creep in, and shareholders could question the transparency and openness the company has to offer.

Lastly, Tucker asks if the decision taken would ensure further sustainable development. I personally believe that during this predicament, in spite of multiple changes in management a few years earlier, a good change would benefit Yahoo's future growth. A situation like this sets a bad tone at the top, and beginning afresh would uplift employee and shareholder morale.

Saturday, September 28, 2019

The Human Genome Project

The Human Genome Project (HGP) is a project undertaken with the goal of understanding the genetic make-up of the human species by determining the DNA sequence of the human genome and the genomes of a few model organisms. The project began in 1990 and, by some definitions, was completed in 2003. It was one of the biggest investigational projects in the history of science. The mapping of the human genes was an important step in the development of medicines and other aspects of health care.

Most of the genome DNA sequencing for the Human Genome Project was done by researchers at universities and research centers in the United States and Great Britain, with other genome DNA sequencing done independently by the private company Celera Genomics. The HGP was originally aimed at the more than three billion nucleotides contained in a haploid reference human genome. Recently several groups have announced efforts to extend this to diploid human genomes, including the International HapMap Project, Applied Biosystems, Perlegen, Illumina, JCVI, Personal Genome Project, and Roche-454.

The "genome" of any given individual (except for identical twins and cloned animals) is unique; mapping "the human genome" involves sequencing multiple variations of each gene. The project did not study all of the DNA found in human cells; some heterochromatic areas (about 8% of the total) remain un-sequenced.

International HGP
Initiation of the project was the culmination of several years of work supported by the Department of Energy, in particular workshops in 1984 [1] and 1986 and a subsequent initiative of the Department of Energy. [2] This 1986 report stated boldly, "The ultimate goal of this initiative is to understand the human genome" and "Knowledge of the human genome is as necessary to the continuing progress of medicine and other health sciences as knowledge of human anatomy has been for the present state of medicine." Candidate technologies were already being considered for the proposed undertaking at least as early as 1985. [3]

James D. Watson was head of the National Center for Human Genome Research at the National Institutes of Health (NIH) in the United States starting from 1988. Largely due to his disagreement with his boss, Bernadine Healy, over the issue of patenting genes, he was forced to resign in 1992. He was replaced by Francis Collins in April 1993, and the name of the center was changed to the National Human Genome Research Institute (NHGRI) in 1997.

The $3-billion project was formally founded in 1990 by the United States Department of Energy and the U.S. National Institutes of Health, and was expected to take 15 years. In addition to the United States, the international consortium comprised geneticists in China, France, Germany, Japan, and the United Kingdom. Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by then US president Bill Clinton and British Prime Minister Tony Blair on June 26, 2000). [4] Ongoing sequencing led to the announcement of the essentially complete genome in April 2003, two years earlier than planned. [5] In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the last chromosome was published in the journal Nature. [6]

There are multiple definitions of the "complete sequence of the human genome".
According to some of these definitions, the genome has already been completely sequenced, and according to other definitions, the genome has yet to be completely sequenced. There have been multiple popular press articles reporting that the genome was "complete." The genome has been completely sequenced using the definition employed by the International Human Genome Project. A graphical history of the human genome project shows that most of the human genome was complete by the end of 2003. However, there are a number of regions of the human genome that can be considered unfinished.

First, the central regions of each chromosome, known as centromeres, are highly repetitive DNA sequences that are difficult to sequence using current technology. The centromeres are millions (possibly tens of millions) of base pairs long, and for the most part these are entirely un-sequenced. Second, the ends of the chromosomes, called telomeres, are also highly repetitive, and for most of the 46 chromosome ends these too are incomplete. We do not know precisely how much sequence remains before we reach the telomeres of each chromosome, but as with the centromeres, current technology does not make it easy to get there. Third, there are several loci in each individual's genome that contain members of multigene families that are difficult to disentangle with shotgun sequencing methodologies; these multigene families often encode proteins important for immune functions. It is likely that the centromeres and telomeres will remain un-sequenced until new technology is developed that facilitates their sequencing. Other than these regions, there remain a few dozen gaps scattered around the genome, some of them rather large, but there is hope that all these will be closed in the next couple of years.

In summary: our best estimates of total genome size indicate that about 92% of the genome has been completed. Most of the remaining DNA is highly repetitive and unlikely to contain genes, but we cannot truly know until we sequence all of it. Understanding the functions of all the genes and their regulation is far from complete. The roles of junk DNA, the evolution of the genome, the differences between individuals, and many other questions are still the subject of intense study by laboratories all over the world.

Goals
The goals of the original HGP were not only to determine more than 3 billion base pairs in the human genome with a minimal error rate, but also to identify all the genes in this vast amount of data. This part of the project is still ongoing, although a preliminary count indicates about 30,000 genes in the human genome, which is fewer than predicted by many scientists. Another goal of the HGP was to develop faster, more efficient methods for DNA sequencing and sequence analysis and the transfer of these technologies to industry.

The sequence of the human DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations, such as the University of California, Santa Cruz [1], and Ensembl [2], present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data themselves are difficult to interpret without such programs. The process of identifying the boundaries between genes and other features in raw DNA sequence is called genome annotation and is the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. The best current technologies for annotation make use of statistical models that take advantage of parallels between DNA sequences and human language, using concepts from computer science such as formal grammars.
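To make the gene-identification task concrete, here is a minimal sketch in Python of the simplest kind of gene-finding heuristic: scanning a DNA string for open reading frames (a start codon followed by an in-frame stop codon). This is only an illustration of the problem annotation software solves; the HGP's actual annotators relied on far richer statistical models, as noted above, and the function name and length threshold here are invented for the example.

    # Toy open-reading-frame (ORF) scan, a stand-in for genome annotation.
    # Real gene finders use statistical models (e.g., hidden Markov models),
    # not this bare codon scan.

    START = "ATG"
    STOPS = {"TAA", "TAG", "TGA"}

    def find_orfs(dna, min_codons=30):
        """Yield (start, end) indices of ORFs on the forward strand."""
        dna = dna.upper()
        for frame in range(3):                     # three reading frames
            i = frame
            while i + 3 <= len(dna):
                if dna[i:i + 3] == START:
                    for j in range(i + 3, len(dna) - 2, 3):
                        if dna[j:j + 3] in STOPS:  # in-frame stop codon
                            if (j - i) // 3 >= min_codons:
                                yield (i, j + 3)
                            i = j                  # resume after this ORF
                            break
                i += 3

    if __name__ == "__main__":
        seq = "CCATGAAATTTGGGCCCTAGTT"
        for start, end in find_orfs(seq, min_codons=2):
            print(start, end, seq[start:end])      # 2 20 ATGAAATTTGGGCCCTAG

Even this toy version hints at why computer programs are indispensable here: a three-billion-base sequence offers far too many candidate reading frames for manual inspection.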
Another, often overlooked, goal of the HGP is the study of its ethical, legal, and social implications. It is important to research these issues and find the most appropriate solutions before they become large dilemmas whose effect will manifest in the form of major political concerns.

All humans have unique gene sequences; therefore the data published by the HGP does not represent the exact sequence of each and every individual's genome. It is the combined genome of a small number of anonymous donors. The HGP genome is a scaffold for future work in identifying differences among individuals. Most of the current effort in identifying differences among individuals involves single nucleotide polymorphisms and the HapMap.

How it was accomplished
Funding came from the US government through the National Institutes of Health in the United States, and the UK charity, the Wellcome Trust, which funded the Sanger Institute (then the Sanger Centre) in Great Britain, as well as numerous other groups from around the world. The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces are called "bacterial artificial chromosomes", or BACs, because they can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The larger, 150,000-base-pair pieces then go together to create chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.

Celera Genomics HGP
In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter and his firm Celera Genomics. The $300 million Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project. Celera used a riskier technique called whole genome shotgun sequencing, which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three thousand million base pair human genome.

Celera initially announced that it would seek patent protection on "only 200-300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100-300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish its findings in accordance with the terms of the 1996 "Bermuda Statement," by releasing new data quarterly (the HGP released its new data daily), although, unlike the publicly funded project, it would not permit free redistribution or commercial use of the data.
In March 2000, President Clinton announced that the genome sequence could not be patented, and should be made freely available to all researchers. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.

Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper) [7] and Science (which published Celera's paper [8]) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, filling in to ~92% of the sequence currently.

The competition proved to be very good for the project, spurring the public groups to modify their strategy in order to accelerate progress. The rivals initially agreed to pool their data, but the agreement fell apart when Celera refused to deposit its data in the unrestricted public database GenBank. Celera had incorporated the public data into their genome, but forbade the public effort to use Celera data.

HGP is the most well known of many international genome projects aimed at sequencing the DNA of a specific organism. While the human DNA sequence offers the most tangible benefits, important developments in biology and medicine are predicted as a result of the sequencing of model organisms, including mice, fruit flies, zebrafish, yeast, nematodes, plants, and many microbial organisms and parasites.

In 2004, researchers from the International Human Genome Sequencing Consortium (IHGSC) of the HGP announced a new estimate of 20,000 to 25,000 genes in the human genome. [9] Previously 30,000 to 40,000 had been predicted, while estimates at the start of the project reached as high as 2,000,000. The number continues to fluctuate and it is now expected that it will take many years to agree on a precise value for the number of genes in the human genome.

History
In 1976, the genome of the virus Bacteriophage MS2 was the first complete genome to be determined, by Walter Fiers and his team at the University of Ghent (Ghent, Belgium). [10] The idea for the shotgun technique came from the use of an algorithm that combined sequence information from many small fragments of DNA to reconstruct a genome. This technique was pioneered by Frederick Sanger to sequence the genome of the Phage Φ-X174, a tiny virus called a bacteriophage that was the first fully sequenced genome (DNA sequence) in 1977. [11] The technique was called shotgun sequencing because the genome was broken into millions of pieces as if it had been blasted with a shotgun.

In order to scale up the method, both the sequencing and genome assembly had to be automated, as they were in the 1980s. Those techniques were shown applicable to sequencing of the first free-living bacterial genome (1.8 million base pairs) of Haemophilus influenzae in 1995 [12] and the first animal genome (~100 Mbp). [13] It involved the use of automated sequencers producing longer individual sequences of approximately 500 base pairs at that time.
Paired sequences separated by a fixed distance of around 2000 base pairs were critical elements enabling the development of the first genome assembly programs for reconstruction of large regions of genomes (aka 'contigs'). Three years later, in 1998, the announcement by the newly formed Celera Genomics that it would scale up the shotgun sequencing method to the human genome was greeted with skepticism in some circles. The shotgun technique breaks the DNA into fragments of various sizes, ranging from 2,000 to 300,000 base pairs in length, forming what is called a DNA "library". Using an automated DNA sequencer, the DNA is read in 800bp lengths from both ends of each fragment. Using a complex genome assembly algorithm and a supercomputer, the pieces are combined and the genome can be reconstructed from the millions of short, 800 base pair fragments.

The success of both the public and privately funded efforts hinged upon a new, more highly automated capillary DNA sequencing machine, called the Applied Biosystems 3700, that ran the DNA sequences through an extremely fine capillary tube rather than a flat gel. Even more critical was the development of a new, larger-scale genome assembly program, which could handle the 30-50 million sequences that would be required to sequence the entire human genome with this method. At the time, such a program did not exist. One of the first major projects at Celera Genomics was the development of this assembler, which was written in parallel with the construction of a large, highly automated genome sequencing factory. The first version of this assembler was demonstrated in 2000, when the Celera team joined forces with Professor Gerald Rubin to sequence the fruit fly Drosophila melanogaster using the whole-genome shotgun method [14]. At 130 million base pairs, it was at least 10 times larger than any genome previously shotgun assembled. One year later, the Celera team published their assembly of the three billion base pair human genome.
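To illustrate the assembly idea, and only to illustrate it, here is a minimal greedy overlap-merge sketch in Python: it repeatedly merges the two fragments with the longest suffix-prefix overlap. Celera's real assembler handled tens of millions of error-prone reads with paired-end constraints on a supercomputer; nothing here is taken from that system. The greedy strategy also fails on highly repetitive sequence, which is one reason regions like centromeres stayed unfinished.

    # Toy greedy shotgun assembly: repeatedly merge the fragment pair
    # with the longest suffix-prefix overlap. Illustrative only; real
    # assemblers must handle sequencing errors, repeats, and paired ends.

    def overlap(a, b, min_len=3):
        """Length of the longest suffix of a that is a prefix of b."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def greedy_assemble(frags):
        frags = list(frags)
        while len(frags) > 1:
            best = (0, 0, 1)                  # (overlap length, i, j)
            for i, a in enumerate(frags):
                for j, b in enumerate(frags):
                    if i != j:
                        n = overlap(a, b)
                        if n > best[0]:
                            best = (n, i, j)
            n, i, j = best
            if n == 0:                        # no overlaps left
                return "".join(frags)
            merged = frags[i] + frags[j][n:]  # merge best-overlapping pair
            frags = [f for k, f in enumerate(frags) if k not in (i, j)]
            frags.append(merged)
        return frags[0]

    reads = ["GGCCTAGT", "ATGAAATT", "AATTGGCC"]
    print(greedy_assemble(reads))             # ATGAAATTGGCCTAGT

On this toy data the reads reassemble uniquely, but if the same 4-base overlap appeared in two places, the greedy merge could pick the wrong one; that ambiguity is exactly what repetitive DNA creates at genome scale.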
How it was accomplished
The IHGSC used paired-end sequencing plus whole-genome shotgun mapping of large (~100 Kbp) plasmid clones and shotgun sequencing of smaller plasmid sub-clones, plus a variety of other mapping data, to orient and check the assembly of each human chromosome [7]. The Celera group tried "whole-genome shotgun" sequencing without using the additional mapping scaffolding [8], but its inclusion of shredded public data raised questions [15].

Whose genome was sequenced?
In the IHGSC international public-sector Human Genome Project (HGP), researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of the many collected samples were processed as DNA resources. Thus the donor identities were protected, so neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Dr. Pieter J. de Jong. It has been informally reported, and is well known in the genomics community, that much of the DNA for the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11). [16]

HGP scientists used white blood cells from the blood of 2 male and 2 female donors (randomly selected from 20 of each), each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain only half as much DNA from the X and Y chromosomes as from the other 22 chromosomes (the autosomes); this happens because each male cell contains only one X and one Y chromosome, not two like the other chromosomes (autosomes). (This is true for nearly all male cells, not just sperm cells.)

Although the main sequencing phase of the HGP has been completed, studies of DNA variation continue in the International HapMap Project, whose goal is to identify patterns of single nucleotide polymorphism (SNP) groups (called haplotypes, or "haps"). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.

In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of those in the pool [17]. On September 4, 2007, a team led by Craig Venter published his complete DNA sequence [18], unveiling the six-billion-letter genome of a single individual for the first time.

Benefits
The work on interpretation of genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, disorders of hemostasis, cystic fibrosis, liver diseases and many others. Also, the etiologies of cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.

There are also many tangible benefits for biological scientists. For example, a researcher investigating a certain form of cancer may have narrowed down his/her search to a particular gene. By visiting the human genome database on the worldwide web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, diseases associated with this gene, or other datatypes. Further, deeper understanding of the disease processes at the level of molecular biology may determine new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without it.
The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of the theory of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data from this project.

The Human Genome Diversity Project, spinoff research aimed at mapping the DNA that varies between human ethnic groups, which was rumored to have been halted, actually did continue and to date has yielded new conclusions. In the future, HGDP could possibly expose new data in disease surveillance, human development and anthropology. HGDP could unlock secrets behind, and create new strategies for managing, the vulnerability of ethnic groups to certain diseases (see race in biomedicine). It could also show how human populations have adapted to these vulnerabilities.

The Human Genome Project

When populations start to die, there are only so many individuals to choose from for genes, and a founder effect will then be created (Welsch 73). The Human Genome Project set out to identify all the genetic material in humans (Welsch 265). Another type of variation is different from genes: it is physiological. Our blood type is a protein on our red blood cells and is involved in delivering oxygen and in immune responses (Welsch 267). We are only able to give blood to those who have our same blood type, unless we have the blood type that is the universal donor. We have a friend who has suffered miscarriages, the most recent at 26 weeks along. Her body keeps rejecting the baby and they are not sure what the cause is. They are sure that it is not the Rh factor.

The white blood cells also have their own set of proteins, the human leukocyte antigen (HLA) system. This system protects our bodies from foreign objects or infectious agents (Welsch 268). Even within our families we are varied, because we will not all have the same combination of the system. We all react to infections and diseases differently. My husband is highly allergic to artificial smells; his system seems to be in overdrive. When he was in the military, his bunk mate sprayed scented aerosol deodorant and his throat closed up. He then realized he could not handle anything artificial. My friend's cousin had a double lung transplant last year. Several months after her transplant she got an infection, her body rejected her new lungs, and she passed away. I think her rejection of the new lungs was because of the differences between the HLA system of her body and the donor's.

Our bodies also adapt and look different from others in our skin tone and our body types. These traits are not as significant in our bodily functions but are varied nonetheless. We all can have different hair colors, skin colors, and shapes and sizes. Our skin does not really have color; it has a pigment called melanin (Welsch 271). Depending on where a person lives, they may have more melanin production and darker skin. Some can also be tall and skinny or short and chubby. We measure this through anthropometry, which helps determine the variations we see. We put these measurements into the cormic index, the ratio of sitting height to standing height (Welsch 273). The intermembral index is the ratio of arm length to leg length (Welsch 273). Body fat is measured by the BMI, or body mass index; a person can be too skinny or too fat and have a BMI that is not healthy.
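As a worked illustration of these measurements, here is a small Python sketch. The cormic and intermembral definitions follow the text (sitting height relative to standing height, and arm length relative to leg length); expressing them as percentages, and the standard BMI formula of weight in kilograms divided by height in metres squared, are conventional assumptions rather than anything taken from Welsch.

    # Simple anthropometric indices; the percentage scaling is a
    # conventional assumption, not a definition from the textbook.

    def cormic_index(sitting_cm, standing_cm):
        """Sitting height as a percentage of standing height."""
        return 100 * sitting_cm / standing_cm

    def intermembral_index(arm_cm, leg_cm):
        """Arm length as a percentage of leg length."""
        return 100 * arm_cm / leg_cm

    def bmi(weight_kg, height_m):
        """Body mass index: weight (kg) divided by height (m) squared."""
        return weight_kg / height_m ** 2

    print(round(cormic_index(85, 170), 1))       # 50.0
    print(round(intermembral_index(72, 90), 1))  # 80.0
    print(round(bmi(68, 1.70), 1))               # 23.5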
Another variation is race. This is our society's system for classifying people based on how they look, and these differences are believed to reflect root genetic and biological differences.

We also adapt to the environments we encounter. We can either allow our environment to change us or we can change the environment. To survive, we have to figure out what needs to change and react accordingly. We have to have a certain plasticity. We all change during our lifetimes, and the change comes somewhat from our surroundings. We can perform niche construction and make our environment suitable to our living conditions. On the farm my in-laws own, they do several things to ensure their success. They have to give the cows shots to make sure they are healthy enough for reproduction and that the babies will be healthy enough to be sold. They take care of the grass and the other parts of the land to ensure the cows are fed during the spring, summer and fall. They make sure that there is enough hay to feed them during the winter.

As parents, we have the ability to help our children adapt and to set them up for success in life as humans. We teach our children how to cook, clean, read, and write. The ability to care for themselves spans across generations: they will teach their own children these abilities to adapt and survive in the world around them. We pass this on to them through extra-genetic inheritance.

New species emerge through speciation. Differences can become so vast that a population becomes a totally different species, as with the dog and the wolf: both are canines, but the wolf is considered a different species. Evolution takes place as we experience different things in our culture, and we have to adapt as our culture changes. The constructivist approach shows that our biology is a process of construction (Welsch 239). Our bodies work in combination with our genes to affect how genes can be expressed, the epigenetic system of inheritance (Welsch 240). When our genes are altered, we can pass those alterations down to our children, affecting how their bodies work and how they behave.

The way we raise our children affects how they will behave as adults. If we are nurturing, loving and kind to our children almost all of the time, these will be the traits they possess, unless they have something else going on biologically. If we behave negatively with our children and this is all they see, they will in turn possess those traits. This is the behavioral system of inheritance. We also store symbols and communicate them with others around us, showing the world our understanding through them. The symbols we use come from the symbolic system of inheritance.

Manipulating and changing the world around us is important to our biocultural evolution. Change is an important part of who we are. Just as when we move into a new home, a new town, a new school, or even a new job, we change and construct the environment to fit our needs. We do certain things so we can fit in and feel comfortable. It allows us to thrive. We even try to change the land we live on.

Another aspect of biocultural evolution is the evolution of our behaviors. Sociobiology explains our behaviors as related to our biological component (Welsch 245). Our behavior can also be influenced by the earth and the social things going on around us.
This comes from human behavioral ecology (HBE) (Welsch 246). We adapt our behavior to our society so that we can fit in and continue to evolve. Our behaviors are directly connected to our biological selves; this comes from biological determinism (Welsch 247). Some behaviors come forward (emergence) based on whom we see and interact with in our daily lives. We adapt and change through our diet, by moving to different places, and sometimes we even change our bodies through modification to make ourselves fit in, just like runway models who diet extremely and work out to be tiny enough to be considered for the runway. This shapes our cultures around the world and how we all view each other.

Everyone in this world is unique. No two people, even family members, will be completely identical. Our bodies adapt and vary through the generations so that we can continue successfully. We all try to fit in with our behaviors so that our true biological selves can come forward. We need to be conscious of the things we teach our children, because they will be the next generation and will bring forth a new culture.

Works Cited
Welsch, Robert Louis, et al. Anthropology: Asking Questions about Human Origins, Diversity, and Culture. Oxford University Press, 2017.

Friday, September 27, 2019

Arthur Miller's use of capitalism in death of a salesman Research Paper

Arthur Miller's use of capitalism in death of a salesman - Research Paper Example He held on to the business ideals of individualism taught by the previous generation, using them in a present where they were no longer acceptable to a revolutionized society. According to Karim, Willy's failure resulted from his inability to evolve: he continued to apply the 'winner takes all' business principle in an urban society that was past that stage (67). One could consider him an outdated individual who clings to past knowledge in the hope that he can attain a goal, having depended on the same wisdom in the past. He believed that winning the trust and liking of society was the ultimate way to achieve his long-awaited success. However, as the old ways were being replaced by modern methods of conducting business, Willy, as one of the American people who held on to the former faith of individualism as an early frontier ethic in business, had seen several opportunities in his former successes, but he could no longer manage to compete in a business climate that favored capitalism.

Everyone plans to live a happy and satisfied life; however, the means to achieve that desire vary across personalities. For some, even if it takes illegal means, it does not matter, as long as the end goal is achieved. Others prefer honesty and integrity as moral character and values. This is what directed the history of America and certain individuals' success before capitalism in the early 19th century, as illustrated by the stories told of whatever it took to become successful (Cullen 60). Willy Loman was no different, and he strove hard in his sales job to sustain his family and fulfill his desire of living the American dream. His social status, best described as middle class, was accompanied by hardship in acquiring wealth, and hence he had to depend on how society would take to him, based on being liked, in order to thrive in the sales job. The principle of the self-made man, though helpful before, failed, as capitalists could attain the American dream more easily than those using the former strategy.

The growing capitalism taking over the business world forced Willy out of the sales job, because it came with better ways of producing and distributing goods for much more profit, which Willy could not keep up with. Loman suffers frustration after being declined for a job, knowing he had retired as a salesman, a position he struggled at tirelessly all through his life. Through capitalism, power is associated with capitalists like Howard, who dares to fire Willy after his long service in the company, without even minding the moral decency of setting him aside for retirement (Sterling 5). Indications of an old car, an unprofitable career, and financial struggles show that his finances were too weak to raise capital so that he could start a business of his own closer to home. As an investor, Howard hoped for delivery of an efficient service, as he also paid wages to his workers, which determined his profit too. As a capitalist, it would then be arguable whether Willy's firing was justified or not. The aim of capitalism is to acquire more profits after sales and production. Capitalism will make use of the working class to efficiently expand the profit margin,

Thursday, September 26, 2019

Edit the google part and make the conclusion more longer Essay

Edit the google part and make the conclusion more longer - Essay Example Some may argue that they used unorthodox methods in order to get to their status. This is because, by innovating new ways of management, they succeeded in doing the unthinkable. Google "asked 45 year olds for their GPAs" (Lashinsky, 2008); Apple tied its proprietary software with its proprietary hardware (Kahney, 2008); and SEMCO eliminated time clocks for employees (Semler, 1989). In this essay, we will study the different methods by which these companies are run, and how those methods have made them successful. We will also suggest how these same systems can become an eventual detriment. We will give a review of each from articles and will make connections between them.

In the article How Apple Got Everything Right by Doing Everything Wrong, a unique and old-fashioned strategy is described. This strategy is the main reason Apple is one of the most dominant and successful start-up companies in the market. Steve Jobs is the spokesperson for Apple and is featured as the "evil genius." Furthermore, Steve Jobs is not just a public face; he is the brains behind a vast majority of Apple's innovative ideas and the operations of the company. Apple has shown great entrepreneurial merit by envisioning the gaps in the market. These gaps represent the difference between what the market needs and Apple's current product offerings. The company intends to fill them without attempting to copy from existing companies. This includes creating new categories that have become must-have products.

Apple has been operating in a highly challenging market where it is constantly exposed to intense competition and close imitation. For this reason, Apple enforced strict secrecy around the development of its products. Often, team members at Apple were unaware of the outcome of the product design. The product design of Apple is rapidly changing, which creates product obsolescence and interdependence between hardware, software, and internet applications; these are some

Corporate Communication Essay Example | Topics and Well Written Essays - 2000 words

Corporate Communication - Essay Example As these different workers would try to impose their own attitudes and culture on the organisation as well as on fellow employees, it could lead to a different, uncommon and complex organisational culture, negatively impacting the organisation's performance. Thus, for an organisation to succeed, all its employees have to work in unison without any differences, and for that a common, clear and workable organisational culture needs to be implemented in the organisation.

To implement a common organisational culture, organisations can even go for organisational change. That is, as it would be difficult to force a common organisational culture in only some segments of the organisation, it would be better if the organisation goes for organisation-wide change. When the organisation does not perform up to expected levels due to culture issues, or in other cases wants to expand or diversify its operations, the management method has to be changed. This is where the concept of organisational change comes into the picture. That is, organisational change constitutes the structured changing or transitioning of employees, departments and the organisation as a whole from a current state to a favourable or desired future state. So, here the main need for an organisation to change is to implement a common organisational culture, thereby maximising the collective advantages or benefits for all the employees, managers and leaders working for the organisation, and thereby maximising the profit and standing of the organisation.

So, this paper, as part of a literature review, will discuss how implementing a common organisational culture leads to organisational change, and how leaders and managers have to be aware of and, importantly, control these changes, by case-studying Starbucks. When an organisation initiates the process of change management, the first main role the leader should perform is to build an academically and technically strong and experienced workforce as part of the

Wednesday, September 25, 2019

Analyzing an ELS learner piece of language Assignment

Analyzing an ELS learner piece of language - Assignment Example The orthography poses a great problem, as English is written from left to right while Arabic is written from right to left. Keeping in view all these problems, a teacher of English has to perform a very difficult task in making his students master the English language. The present study probes into different aspects of the errors made by Arabic speakers during ESL learning. A sample composition from an Arabic-speaking student has been analyzed to highlight commonly occurring errors. In the following paragraphs we will first point out the errors, then find out the possible causes of these errors, and in the end suggest certain remedial measures.

Different researchers have also paid attention to this issue and have studied Arabic-speaking learners to find out possible solutions. The persistence of these errors suggests that some pedagogical intervention to raise students' consciousness about them is necessary (Cowan, 2008). Lado (1957) hypothesized that errors in the second language (L2) are caused by the interference of the student's native language. Such errors reflect the student's inability to separate L1 and L2. Therefore, a contrastive analysis of L1 and L2, he thought, will help predict the areas of difficulty in L2. Odlin (1989), James (1980), and Brown (1980) pointed out that students' errors in L2 are caused by several processes. These include transfer, overgeneralization and communication strategies.

There are many problematic areas for students of the English language in Arab countries. From the very beginning, the learner realizes that he or she is learning a different language, one which has many sounds that are not present in the mother tongue. Several sounds become difficult for Arab learners. Arabic speakers mostly replace /p/ with /b/, which is why they feel difficulty in pronouncing words like people, popular, and perpetuate. In this case we will hear a /b/ sound instead

Tuesday, September 24, 2019

Economy or goberment related Personal Statement Example | Topics and Well Written Essays - 500 words

Economy or goberment related - Personal Statement Example One can clearly see how Economics has overtaken all other fields. Wolfers (2015) explains the development in question by arguing that the Great Depression was the major reason why Economics took over. The government needed to devise a way to relieve the country from the economic strains that it had gone through, and economists came in handy. Major focus and importance were given to Economics, as it gave answers to the existing problems, as opposed to Psychology or even Anthropology (Wolfers, 2015). In this work, the author also explains that economists are consulted in numerous fields today, including fields that touch on social issues (Wolfers, 2015). This explains why Economics has become a major for many students, as the field is extremely marketable in the job market. The popularity of the field is also expected to increase with the years.

From Wolfers' (2015) work, I agree that the field of Economics has taken over the field of Social Sciences. In the present society, it is evident that a huge percentage of articles, even in the archives, mention concepts related to economics. The number of articles on Psychology, Sociology and the other arts is reducing by the day. This explains the extent to which the world is shifting towards an economic turn. Wolfers (2015) also explains that the rise of Economics began in the 1980s and continues to date. This can be linked to the Great Depression, which caused massive impacts on the economy of the country. After the catastrophe, the government opted to come up with stringent measures that would prevent such an occurrence (Wolfers, 2015). This explains the great interest in Economics. I agree with the author's sentiments, as the government was obligated to come up with measures that would see to a stable economy. The historians who had taken up a huge share of the market had no place, and were slowly overtaken (Wolfers, 2015). I believe the economists were justified. No government

Monday, September 23, 2019

Local and Federal Sharing of Information for Law Enforcement Essay

Local and Federal Sharing of Information for Law Enforcement - Essay Example This plan was put together by the DHS and the FBI in order to share information between their two systems. The overall aim of iDSN is "to achieve biometric-based interoperability with a reciprocal exchange of a small subset of DHS and FBI data. The FBI subset will include information on individuals with outstanding warrants for which biometric information exists ("Wanted Person File"). The DHS subset will include information on individuals who have been denied Visas or aliens who have been expeditiously removed from the United States." (Federal Bureau of Investigation, n.d.) Therefore, this database will allow both groups to access information about the various agencies. Data will be shared between the two agencies, and this includes copies of the database's fingerprint information in order to assist with the comparison of fingerprints. Furthermore, the shared information also allows other data to be included, such as criminal history, biography, and any other relevant history which may also be significant above and beyond fingerprint sharing. All data is stored and accessible in the System of Records. Users will also be able to access the FBI maintained criminal history of each individual through the database.

Sunday, September 22, 2019

Two theories of motivation Essay Example for Free

Two theories of motivation Essay

The subject of motivation can be approached from a number of perspectives. Some theories approach motivation as coming from within a person (Drive Theory), whereas other theories approach motivation as coming from outside the person (Incentive Theory). Compare and contrast two theories of motivation, explaining how the two approaches may differ and how they may be similar. Does one theory seem to explain motivation better than the other? Support your argument with examples from each theory.

Motives are reasons people hold for initiating and performing voluntary behaviour. They indicate the meaning of human behaviour, and they may reveal a person's values. Motives often affect a person's perception, cognition, emotion and behaviour. A person who is highly motivated to gain social status, for example, may be observant of marks of social distinction, may think often about issues that pertain to wealth, may especially enjoy the feeling of self-importance, and may behave in ways associated with upper-class status. By defining motives as reasons, we do not imply that motives are primarily cognitive, any more than establishing a motive for crime in a court of law requires conscious premeditation. A person can have a reason to behave, and thus a motive, without necessarily being aware of it.

Aristotle (330 BCE/1953) divided motives into ends versus means on the basis of the individual's purpose for performing the behaviour. Ends are indicated when a person engages in a behaviour for no apparent reason other than that it is what the person desires to do. Examples include a child playing with a ball for physical exercise and a student reading a book out of curiosity. In each of these examples, the goal is desired for its own sake. In contrast, means are indicated when a person performs an act for its instrumental value. Examples include a professional athlete who plays football for a salary and a student who studies to improve a grade. In each of these examples, the goal (salary, grade) is desired because it produces something else. A person might seek a salary, for example, as a means of enhancing social status, or high grades as a means of pleasing a parent.

An analysis of a person's behaviour may identify a series of instrumental acts followed by one or more end goals that complete a behaviour chain. For example, a person may take a second job for the extra salary (instrumental motive), desire the extra salary to purchase health insurance (instrumental motive), and desire the health insurance to benefit their family (end goal). This example of a behaviour chain shows three behaviours, two motivated by instrumental goals and a third by an end goal. Logically, only goals that are desired for their own sake can serve as the end of a purposeful explanation of a series of human acts (Reiss, 2003).

The number of instrumental motives is, for all practical purposes, unlimited. Only imagination limits how many different ways individuals can pursue the end goal of, say, power. In contrast, the number of ends is limited by human nature (Reiss, 2003). Two theoretical perspectives have been advanced concerning end goals. Multifaceted theory holds that the various end goals are largely unrelated to each other, perhaps to the point where they are genetically distinct sources of motivation with different evolutionary histories.
Multifaceted theorists include philosophers who have suggested lists of the most fundamental motives of human nature (e.g., Spinoza, 1675/1949), psychologists who have put forth evolutionary theories of motivation (e.g., McDougall, 1926) and psychologists who have suggested theories of human needs (e.g., Murray, 1938). In contrast, unitary or global theorists hold that end goals can be profitably reduced to a small number of categories based on common characteristics. Unitary theorists seek the underlying psychological principles that are expressed by diverse motivational events. The ancient Greek philosophers, for example, reduced end goals into categories expressing the needs of the body, mind and soul (e.g., Plato, 375 BCE/1966). Hedonists distinguished between end goals associated with pleasure enhancement and those related to pain reduction (Russell, 1945). Freud (1916/1963) reduced motives to sexual and aggressive instincts. Today, some social psychologists classify end goals into two global categories, called drives (or extrinsic motivation) and intrinsic motives (IMs). The distinction has been influential: 1,921 scholarly publications on intrinsic motivation (IM) appeared between January 1967 and the present day (source: PsycINFO). IM has been investigated in social psychology (e.g., Ryan & Deci, 2000), developmental psychology (e.g., Harter, 1981), clinical psychology (e.g., Eisenberger & Cameron, 1996), organisational psychology (e.g., Houkes, Janssen, de Jonge, & Nijhuis, 2001), and educational psychology (e.g., Kohn, 1993).

Drive Theory

Thorndike's (1911) law of effect reduced human motivation to categories of reward and punishment. This law holds that responses are strengthened when they lead to satisfaction and weakened when they lead to punishment. Psychologists studying learning soon realised that Thorndike's law is a tautology, a proposition that is circular (true by definition). The following statements, for example, are circular with respect to each other: "Rewards strengthen behaviour" and "Any event that strengthens behaviour is a reward". The concept of drive was introduced to escape from the circularity of the law of effect (Brown, 1961). Instead of identifying reward as any stimulus or satisfying event that strengthens behaviour, drive theorists defined it as a reduction in a state of deprivation. The statements "Drive reduction strengthens behaviour" and "Drive reduction occurs when a state of deprivation is lessened" are not circular with respect to each other. Hull (1943) recognised four types of drives: hunger, thirst, sex and escape from pain. In many animal learning experiments, investigators have induced drives by depriving animals of an important need prior to the experiment. The deprivation of food, for example, establishes food as a powerful reward, increasing the animal's motivation to learn responses that produce food (Skinner, 1938). Much of animal learning theory is based on the results of psychological studies with food-deprived or water-deprived animals.

Unitary Intrinsic Motivation Theory

The unitary construct of IM was put forth as an alternative to drive theory. The initial insight was that many of the motives not explained well by drive theory, such as exploration (curiosity), autonomy, and play, have common properties. To a large extent, unitary IM theory initially represented an attempt to show the essential differences between drives and what psychodynamic theorists have called ego motives.
In the past, the distinction between drives and IMs has been thought to have a physiological basis, at least according to some published remarks. The general idea was that drives such as hunger and thirst arise from tissue needs involving peripheral components of the nervous system, whereas IMs arise from psychological or cognitive processes involving primarily central neural activity. Deci (1975), for example, wrote that the primary effects of IM are in the tissues of the central nervous system rather than in the non-nervous-system tissues (p. 61). The physiological paradigm for distinguishing drives from IMs always lacked scientific support; indeed, we now know that it is physiological nonsense. Motives such as hunger and thirst, for example, involve significant central nervous system and cognitive activity (Berntson & Cacioppo, 2000). Both the behaviourist concept of drive and the concept of IM as nondrive have no precise physiological meaning, and both were originally put forth at a time when little was known about the physiology of motivation.

Conclusion

Since antiquity, scholars have debated whether human motives can be reduced to a few global categories. Ancient Greek philosophers, for example, distinguished between motives associated with the body (such as hunger and thirst) and those associated with the intellect (such as curiosity, morality and friendship). In the early part of the 20th century, Freud (1916/1963) argued that all motives are ultimately linked to sex. Hedonists, on the other hand, reduced all motives to pleasure seeking versus pain avoidance. The concept of IM can be viewed as a modern example of this reductionist effort in motivation theory. IM theorists divide motives into two global categories: drives (also called extrinsic motivation) and intrinsic motivation. Drives are about biological survival needs, whereas IMs pertain to what some have called ego motives. Hunger, thirst, and pain avoidance are paradigm examples of drives, whereas curiosity, autonomy, and play are paradigm examples of IMs.

Saturday, September 21, 2019

Characteristics of problems

Determining the type of problem to be solved is particularly difficult. From the scientific point of view it has not yet been treated sufficiently. It is, nevertheless, of fundamental importance, because it covers the whole field of creativity, and the problem solver's heuristic behavior is contingent on the type of problem.

What is a problem? This question was asked and answered by Karl Duncker (1945). Duncker, who was a Gestalt psychologist, defined a problem in these words: "A problem arises when a living organism has a goal but does not know how this goal is to be reached." This definition is, no doubt, very useful, because creativity tasks and activities always strive to address a problem. Yet Duncker's definition and formulation poses these caveats:

- It is necessary to distinguish between a task and a problem. It is the subject's level of domain knowledge, including his ability to find pertinent knowledge if necessary, that makes the difference between the two. A task set by a researcher or experimenter may be a problem to certain subjects and no problem to others.
- A problem may vanish or be resolved if the subject changes his goal.
- A problem does not exist de facto unless the subject observes discrepancies between his current situation and the goals he pursues.

Reitman (1965) proposed that problems be viewed as three-component entities, having an initial state, a final (goal) state, and a set of processes that facilitate reaching the goal, starting from the initial state. Minsky (1961) proposed a distinction between two types of problems: those that, according to the nature of the conditions of acceptability of solutions, are either well defined or ill defined.

A problem satisfying Reitman's conditions (Reitman, 1965) is a so-called well-defined problem: it can be solved by applying a systematic procedure that makes it possible to decide whether a proposed solution is correct or not. It means that it is totally decidable: all pertinent solutions can be evaluated strictly using one binary variable, right or wrong. The solution can thus be described as an all-or-nothing phenomenon. There are no intermediate solutions between the functional and non-functional ones. In general terms, any test for which there exists a rigorous method of comparison between what is proposed and what is required is a well-defined problem. Examples of well-defined problems are board games, problems in mathematics, or problems in logic. They may nevertheless be very difficult to resolve. Given man's limited resources, psychologists face the task of explaining how human beings manage to solve problems in chess, mathematics or geometry within reasonable time.

Ill-defined problems are those that are not well defined. They result in a multitude of solutions that cannot be classified by using a binary truth-value, but only by using a relative qualitative scale. The response to a requirement thus allows grades, the determination of which is left to the referees. The majority of problems occurring in everyday life are ill-defined problems: the improvement of an object or an apparatus, a new use of what is already known, the search for a sales idea or a marketing idea, etc. Ill-defined problems arise when some components of the problem's statement, in the sense of Reitman, are unspecified, vague, or fuzzy. The definedness of problems varies in degree (Reitman, 1965, Ch. 5).
For instance, 'take a little flour and bake bread for these people' is vague in terms of the quantity of flour and the number of people, but specifies the method clearly: bake. Another statement may run like this: 'Let us overcome the current economic crisis.' This statement does not specify the method: what should be done to overcome the crisis? 'Do not just hang around, maximize something' is an exhortation taken from a cartoon, in which the initial state, the method and the goal are all shrouded in a mental fog. Ill-defined problems are more common than well-defined problems, but it is all the more difficult to explain how to tackle them.

It is worth noting that Minsky's postulate does not necessarily cover the distinction between problem solving and creativity. For instance, the discovery of a new algorithm, or a new combination of known algorithms, is a creative act. But well-defined problems in the sense of Minsky may lead to an opposition between algorithmic procedures and inferential procedures.

As for the ill-defined problems, Reitman (1964) proposed a typology of six classes of problems comprising the transformation or generation of states, objects, or collections of objects. This taxonomy is not presented as a universal tool covering the whole field of creative situations, but simply as a general structure making it possible to collect the largest possible number of creative situations. This attempt at systematization has mainly a descriptive value, but it is not unlikely that it could also be used for deducing hypotheses related to the behavior of effectual solutions.

Reitman's work is based on the introduction of the following three concepts: let A be an initial state or object (one which is expected to undergo transformation, modification, complementing, improvement, etc.), let B be a final state or object (the solution to be obtained, elimination of the problem), and let the symbol ⇒ denote a process, program, or sequence of operations. It is then possible to represent a large number of problematic situations starting from these three symbols, by means of a general vector [A, B, ⇒]. Using these three concepts, six types of poorly defined problems can be distinguished.

Type I. The initial and terminal states A and B are well specified: the relevant data are known and the requirements to be satisfied are explained precisely. The problem then consists in discovering the process ⇒ that makes it possible to pass from the well-specified state A to the well-specified state B. For instance: how can a given function be incorporated in a specific device? This type seems to cover a large class of problem situations.

Type II. The terminal state B is less precisely specified than in the previous type, while A is left entirely at the discretion of the experimenter. In fact, nothing is said about the state, object or assembly of objects from which to start. The initial material is largely undetermined and admits only one constraint to aid in constituting the one possible solution. For instance: what should be done to make traveling by train more pleasurable? Here, obviously, the current state represents some level of train travel comfort or pleasure, and this should be increased. But what exactly is to be achieved is an open question.
Type III. The initial state A consists in this case of an assembly of constituent parts, each of which represents a concrete entity, while B represents a state or object to be achieved which is defined vaguely and is characterized by the fact that one or several of the constituent parts of A lose their separate identities once B is reached. Reitman cites as an example Napoleon's cook, who was charged with the task to "make a good dish" B to celebrate the victory at Marengo using only the available ingredients A. This type is undoubtedly less general than the preceding ones.

Type IV. A and B are presented as consisting of sub-components and are rather poorly defined. This type differs from type II in that in the latter case there are no restrictions imposed on the search: different analogous paths and different associative paths can be explored, often relatively fruitfully. In type IV it is not like that. The distinction between sub-components provides constraints within which the problem solution has to take place. The search is, in other words, more strictly constrained than it is in problems of type II.

Type V. The initial state A is given by reference to a well-defined object; the final state B is given by a set of similarities and dissimilarities with respect to A. An example given by Reitman to illustrate this type is the following: manufacturer α of some equipment encounters serious competition from β-company's product. The first company, α, decides to change the design of the product in question, to offer a lower price for a quality comparable to what its competitor β asks. The task does not necessarily require an entirely new manufacturing process, because the added cost of a new process would not help to slash the price according to the original estimates. Besides, the modification must be implemented fast, because the competing product is already in the market while α-company's sales decrease with each passing day. The exigencies of this example illustrate the general type V: produce a new device that must be functionally similar to the old version but must be cheaper.

Type VI. In this case, the final state B is well specified while the initial state A remains essentially empty, unstructured and largely undetermined. Characteristic examples cited by Reitman comprise: explain a new phenomenon, discover an alibi for a criminal deed, etc. This type differs from type II in the degree of precision of the task.
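Reitman's [A, B, ⇒] vector invites a small illustrative sketch. Below, a problem is modeled as a triple whose components may or may not be specified, and a rough classifier maps the pattern of specified components onto the types; the encoding and the coarse grouping of types are assumptions made for illustration, not Reitman's own notation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Problem:
    initial: Optional[str]   # A: initial state/object, None if unspecified
    goal: Optional[str]      # B: final state/object, None if unspecified
    process: Optional[str]   # the arrow: sequence of operations, None if unknown

def sketch_type(p: Problem) -> str:
    """Very rough mapping from which components are specified to Reitman-like types."""
    if p.initial and p.goal and not p.process:
        return "Type I: find the process linking well-specified A and B"
    if not p.initial and p.goal:
        return "Type II/VI: goal given, initial state left open"
    if p.initial and not p.goal:
        return "Type III/V: start from given material, goal only loosely constrained"
    return "Type IV: both ends only partially specified"

print(sketch_type(Problem("function spec", "device with that function", None)))
print(sketch_type(Problem(None, "more pleasurable train travel", None)))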
It is thus possible to distinguish among six categories of poorly defined problems by resorting to almost formal properties of their formulation. A research activity whose results would show that these categories incite heuristically different behavior on the part of individuals and groups still has to be accomplished. A relevant taxonomy first establishes some ordering, i.e. introduces some logic into the pertinent knowledge field. For this purpose the taxonomy distributes the phenomena or the entities considered according to their relevant characteristics, with no ambiguity involved. It appears that, in general, each taxonomy displays at least two different utility values. First of all, the taxonomy presents a reference value that provides a framework for a certain subset of the universe. The information already available about the elements of this subset thus ceases to be fragmented and simply accumulated: the pieces are ordered with respect to one another, and they can be integrated and complemented. Fragmented knowledge thus becomes systematic. This knowledge also represents an "operational" or heuristic value of the taxonomy in question. This value becomes apparent when the taxonomy leads to empirical research in order to validate its structure, its principle and its extent, or to uncover which variables of the taxonomy can be expected and unified. In the case of problem-solving and creativity research, one can try to establish some correspondence between certain types of tasks and certain behavioral phenomena, particularly those of a psycholinguistic nature.

A first differentiation of problems might take into consideration their different objective properties:

- The problem is algorithmic: it can be resolved using an ordered sequence of specific operations. It allows, in this sense, a truly coordinated division of labor, and is particularly suitable for groups with a centralized communication structure.
- The problem is inferential: it can be visualized by means of trees, but the process of generating the trees cannot be decomposed into concatenated elementary operations. A homogeneous structure is then more appropriate. It can be seen that groups facing a specific situation spontaneously adopt the optimum organization to respond to it.

Most authors, however, have resorted to local dichotomies based on a multitude of imprecise criteria. The straightforward problem typologies are the following:

- Verbal and non-verbal tasks. Verbal tasks are supposed to mobilize important cultural experience and imply the use of specific functions or hypothetical factors. Non-verbal tasks are symbolic, or in other ways dependent on non-verbal perceptions.
- Intellectual and manipulation-dependent tasks. In intellectual tasks, the principal operator is the brain. Manipulation-dependent problems require a coordination of the brain and muscular factors.
- Unique-solution and multiple-solution tasks: there are problems having a unique solution and problems having multiple solutions.

The totality of distinctions pertinent to a particular solution domain cannot be generalized, because their underlying criteria are too coarse and do not allow more than a very summary control of the situation.

Shaw's dimensional analysis

In an attempt to present various aspects of group tasks in a systematic manner, Shaw (1963) collected a very eclectic set of 104 statements mostly taken from the experimental literature. The statements relate to both ill-defined and well-defined problems, to verbally and non-verbally formulated tasks, etc. These various statements were evaluated according to ten a priori defined dimensions, which can be visualized as continuously varying intervals in which each task occupies a point. The ten dimensions are characterized in the following manner:

- Requirements of cooperation. This dimension defines the degree to which members of the group are required to act in a coordinated manner to complete the task successfully. It is thus a measure of dependence between the goal and the coordinated activity of the subjects.
- Verifiability of the decision. This is the degree to which the "rightness" or adequacy of the solution can be proved, either by reference to an authority, or by logical procedures (usually a mathematical proof), or by feedback (for instance by examining the consequences of the decision taken).
- Difficulty. This is defined by Shaw abstractly as the quantity of effort necessary for executing the task. Specifically, an indicator of difficulty can be the time required for solution, the number of errors made, etc.
- Clarity of purpose. This denotes the degree of precision with which the requirements of the task are presented to members of the group, and how the members perceive the requirements.
- Multiplicity of approaches to the goal. This dimension expresses the greater or lesser possibility of resolving the problem by various procedures. It is thus a matter of possible paths to the solution, i.e. of the number of alternative solutions.
- Relationship between mental and motor requirements. A task that only requires the implementation of intellectual activities will be among the strongest on this dimension. Conversely, tasks requiring only motor abilities will be among the weakest. A task requiring both intellectual and motor activities occupies an intermediate position between the two extremes.
- Intrinsic interest. Problems are not equally attractive, i.e. they do not mobilize the same motivation. This dimension thus registers the degree to which a particular task appears interesting to the subjects.
- Operational requirements. This dimension was introduced to evaluate the number of different kinds of operations, knowledge or abilities required for the completion of the task.
- Familiarity within the population. Individuals might have had previous experience of the task in question, either directly or by means of an analogous task. This dimension thus evaluates the relative "rareness" of a class of problems to a population.
- Multiplicity of solutions. This is the number of different correct solutions for the problem in question. In a well-defined problem that number can in general be evaluated exactly; otherwise the estimate is highly intuitive.

This family of dimensions is intended to cover the maximum of traits occurring in every heuristic situation. Certain dimensions thus relate to formal properties of the task, for instance numbers 2, 5, or 10, while others, e.g. numbers 7 and 9, refer directly to the consequences of applying a particular semantics (second level of determination). Forty-nine referees, mostly graduate students of psychology, were given the task of distributing the 104 sample tasks according to the ten dimensions shown above. Eight positions or degrees ordered by magnitude were defined. The judgments were consistent, except for the dimension "clarity of purpose".

With these data, Shaw ran two factor analyses, which disclosed five significant factors for task analysis:

- Difficulty (factor I). The quantity of required effort displays a close relationship to the number of operations, knowledge, and abilities required for solving the problem. The fourth dimension, "clarity of purpose", is equally an important aspect of difficulty: the less clear the goal is, the more difficult the task is judged to be.
- Multiplicity of solutions (factor II). This is a complex dimension that relates to the number of acceptable solutions, to the diversity of paths leading to the solutions, and to the verifiability of a solution. Shaw thinks that the essential aspect is the number of solutions, the other two merely being its consequences. When several solutions are available, there are also several ways to reach them, and proving the adequacy of each solution rigorously is hardly possible.
- Cooperation requirements (factor III). These correspond exactly to the dimension of the same name: the degree to which completing a task successfully implies coordinated action on the part of group members.
- The relationship between intellectual and motor requirements (factor IV). This no doubt constitutes an independent dimension, but it shows only a very weak correlation with familiarity with the task within the population. Familiarity in the population is considered a separate dimension for the same reason. Nevertheless, it is necessary to point out that familiarity seems relatively irrelevant, at least under the particular conditions of this work, where the majority of the tasks were somehow familiar to the subjects.
- Intrinsic interest (factor V). This corresponds to the intensity of motivation and the attraction exerted by the problem on the group members, and is a dimension intertwined with factor II.

The first three of the five factors finally obtained seem to be both the most important and the least ambiguous ones. It is of course possible, as Shaw himself notes, that there are other, equally important dimensions which continued research could bring forth. This first attempt makes it possible largely to control the principal components of the situation that comes into being as a problem to be solved is given to the subjects. This is the only condition under which the accumulation of experimental data in this field can be transformed into scientific knowledge.
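A minimal way to picture Shaw's scheme is to treat each task as a point in a ten-dimensional space, each coordinate an ordinal rating (the text's eight positions suggest a 1-8 scale). The sketch below compares two task profiles; the dimension list follows the text, while the two sample tasks and all their ratings are invented for illustration.

# Ten dimensions from Shaw's analysis; ratings are ordinal, 1 (low) to 8 (high).
DIMENSIONS = [
    "cooperation requirements", "verifiability", "difficulty",
    "clarity of purpose", "multiplicity of approaches",
    "mental vs motor requirements", "intrinsic interest",
    "operational requirements", "familiarity", "multiplicity of solutions",
]

# Hypothetical ratings for two sample tasks (not from Shaw's data).
chess_problem = dict(zip(DIMENSIONS, [2, 8, 7, 8, 6, 8, 7, 5, 4, 3]))
ad_campaign   = dict(zip(DIMENSIONS, [7, 2, 6, 4, 8, 7, 6, 6, 3, 8]))

def profile_distance(a, b):
    """City-block distance between two task profiles."""
    return sum(abs(a[d] - b[d]) for d in DIMENSIONS)

print(profile_distance(chess_problem, ad_campaign))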
Categorization by Roby and Lanzetta

Roby and Lanzetta (1958) proposed a model intended to define and highlight the most important characteristics of a group task. For this purpose, they distinguish four sets of events occurring in the functioning of any group task system:

a. A set Ti of task input data. Here belong, for instance, the formulation of the problem to be solved and the material it implies.
b. A correlative set Gi of initial activities of the group. These comprise, among others, waiting times, observation, data recording, communication associated with input variables, etc.
c. A set Go of outputs produced by the group. In the creative process these comprise the traces of the heuristic process and solution suggestions.
d. A set To of environmental changes following from the group's activities.

Roby and Lanzetta define three general types of properties:

- Descriptive aspects, including the qualitative nature of various events, their number, and metric properties.
- Distribution of the events in space or in relation to other events.
- Functional aspects of events, i.e. their temporal occurrence as a function of foregoing events (sequential analysis).

Each set of events, Ti, Gi, Go and To, can be studied according to these three types of properties. In theory at least, it is possible to characterize any group task, and in particular any creative situation, using a double-entry table of 12 cells. This is the formal equipment of the descriptive system of group tasks proposed by Roby and Lanzetta.

In an abstract analysis, however, this representation does not make possible the understanding of the truly psychological meaning of a specific task. This remark led the authors to propose the complementary notion of "critical exigencies". This concept was introduced to cover the fact that each task requires certain behavior on the part of the group to be correctly executed, and calls for certain specific types of activities to be carried out. The implementation of these requirements should thus help to reduce the discontinuity mentioned between the structural properties of the task and the psychological or psychosocial phenomena generated by its handling. It is a different manner of contrasting the general and the particular. In a way, this is what was above called the "second level of determination".

Roby and Lanzetta's intention was not to put forward a theory permitting one to characterize problems rigorously, but rather to present a table for the analysis of systems of group tasks. Their framework thus theoretically permits classifying any task by entering the values relevant to the task in the 12 boxes of the analysis table, but it does not make it possible to classify the types of tasks using a specific corpus of formal properties. Thus, Roby and Lanzetta did not forge a typological tool but, rather, a descriptive tool, the general purpose of which is found precisely in the fact that the tool is deemed able to adapt itself to any task. The goal of their work was not to distribute the generalized variable "task structure" on an arbitrary scale, but rather to find a set of invariant characteristics that would make it possible to situate the various problems that appear in the life of a working group. Creative problems constitute in this context evidently merely a special case. It follows that the effort to determine the "invariants" of the analysis is probably of utmost importance and should complement any typological effort.

Finally, an adequate taxonomy of poorly defined problems must comprise a meta-linguistic analysis of their formulation in natural language: it must be possible to establish a rigorous correspondence between a formal type and the multitude of its verbal expressions or concretizations and, in parallel, to start from a specific semantics to reach the logical class it illustrates. Roqutte (1975) sketches the first attempt in this respect.
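Roby and Lanzetta's double-entry table can likewise be sketched as a 4 x 3 grid: the four event sets crossed with the three property types, each of the 12 cells collecting whatever observations apply. Only the row and column labels below come from the text; the example entry is a placeholder.

EVENT_SETS = ["Ti (task inputs)", "Gi (initial group activities)",
              "Go (group outputs)", "To (environmental changes)"]
PROPERTIES = ["descriptive", "distributional", "functional"]

# Build the empty 12-cell double-entry table.
analysis_table = {(e, p): [] for e in EVENT_SETS for p in PROPERTIES}

# Example entry (a placeholder observation, not from Roby and Lanzetta).
analysis_table[("Ti (task inputs)", "descriptive")].append(
    "problem statement given verbally; three data items")

for (event_set, prop), notes in analysis_table.items():
    if notes:
        print(event_set, "/", prop, "->", notes)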
Psychologists studying the ways people solve problems have adopted a reasonable strategy. They study how people handle seemingly well-defined problems, and then apply the procedure to the study of ill-defined tasks. In some instances shortcuts to solving an ill-defined problem are possible: seek a well-defined version of the same problem and try to solve it, or find a new definition of the problem. Definition or interpretation of the problem is as important in tackling well-defined tasks as it is in working with ill-defined tasks.

Adversary and non-adversary problems

This is another distinction between problems. An adversary problem is one in which the problem solver is competing with a thinking opponent, or a seemingly thinking opponent, like a chess-playing computer. In non-adversary problems the battle goes on between a thinking problem solver and inert problem features. The latter may be symbolic or real, but they do not react to what the problem solver does in order to "defeat" him, and they do not care about what the human problem solver feels.

Semantically rich and semantically impoverished problems

This distinction seems to be increasing in importance. It was elaborated by Chi and his coworkers (1982). A problem is semantically rich for a problem solver who brings significant relevant knowledge to it. The opposite is true of semantically impoverished problems. As an example, consider a problem given to two problem solvers. For the domain expert it is a semantically rich problem; for the novice it is a semantically impoverished problem. This distinction thus expresses the problem solver's view of the problem situation, or Shaw's familiarity within the population. Most puzzles, IQ tests, and the like are semantically impoverished for most subjects. Much psychological research has focused on solving semantically impoverished puzzles of the non-adversary type. The semantically rich non-adversary tasks are increasing in importance; this category comprises most tasks in computer programming and in physics.

Friday, September 20, 2019

Aspects Of Database Security Information Technology Essay

Many native methods of providing database security are discussed below, along with a survey of database threats, issues and remedies, and mechanisms proposed for strengthening database security. It seems desirable to get an up-to-date understanding of the complete set of security problems faced, in order to devise better methodologies for database security. The research study regarding database security is organized as follows: Section 1 highlights the native methods of database security which have been employed, Section 2 describes the threats faced by databases, and Section 3 discusses various proposed remedies to database security issues. Improper safeguarding of data might compromise database confidentiality, availability and integrity. In order to prevent this, it is very important to form a comprehensive database security concept.

Importance of Data

The security of data has always been an issue, but with more applications relying on databases to store information, the threats to security have increased manifold. Security of data is a more crucial issue today than ever, and its importance is clearly understood as well. The three main objectives of database security are confidentiality, integrity and availability [1]. Databases have to be secured in any case, since they contain bulk amounts of data, both confidential and public. The loss of integrity of data can have a disastrous effect not only for a specific user; the reputation of the whole organization is also at stake. Where the privacy of the data itself is of utmost importance, methods are required to perturb the original data, converting it to some anonymous form. Anonymization in that case is carried out in such a way that the original data integrity and its relationships are maintained while the data is perturbed for analysis.

Threats to Databases

Databases today face a growing risk of threats and vulnerabilities. Security breaches are typically categorized as unauthorized data observation, incorrect data modification, and data unavailability. Unauthorized data observation results in the disclosure of information to users not entitled to gain access to such information [2]: the data is seen by users for whom it is not intended. With incorrect data modification, once the data in the database is modified its integrity is lost, and proper usage of the data can no longer be carried out. With data unavailability, the true information is not available when it is needed.

Countermeasures to Threats

Some countermeasures that can be employed are outlined below:

- Access controls (can be discretionary or mandatory)
- Authorization (granting legitimate access rights)
- Authentication (determining whether a user is who they claim to be)
- Backup
- Journaling (maintaining a log file enables easy recovery of changes)
- Encryption (encoding data using an encryption algorithm)
- RAID (Redundant Array of Independent Disks; protects against data loss due to disk failure)
- Polyinstantiation (data objects that appear to have different values to users with different access rights / clearance)
- Views (virtual relations which can limit the data viewable by certain users) [3]
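To make the encryption countermeasure concrete, here is a minimal sketch using the Fernet recipe from the third-party Python cryptography package (assumed installed) to encrypt a value before it is stored; the column value is invented for illustration, and a real deployment would also need proper key management.

from cryptography.fernet import Fernet

# Key generation happens once; the key must be stored securely, never with the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical sensitive column value, encrypted before insertion into the database.
ssn_plain = b"123-45-6789"
ssn_stored = cipher.encrypt(ssn_plain)   # ciphertext token kept in the table

# On an authorized read, decrypt with the same key.
assert cipher.decrypt(ssn_stored) == ssn_plain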
Security Solutions for Databases

To protect data from losing its confidentiality, integrity and availability, different mechanisms have been proposed and are currently in use in Relational Database Management Systems. The protection mechanisms used to provide security to databases include firewalls, which act as the first line of defense, and Intrusion Detection Systems, which detect intrusions into the database. Achieving high security for databases is a continuous and tough job: data in the databases has to be secured so that no loss, leakage or unwanted access occurs. The database security model is structured using the access control policy, authorization policy, inference policy, accountability policy, audit policy, and consistency policy [5]. The discussion here focuses on the access control policy, with some attention to the other mechanisms of security as well, including authentication, inference avoidance, different levels of access control and the protection of the data itself.

Access Control Policy

The access control system is the database component that checks all database requests and grants or denies a user's request based on his or her privileges. (Here we assume that the user has been authenticated.) [6] Four settings are discussed below:

- Discretionary access control in RDBMS
- Mandatory access control in RDBMS
- Discretionary mechanism in OODBMS
- Mandatory mechanism in OODBMS

One of the main mechanisms to secure databases is the access control mechanism. In this regard, assurance has to be made that access is granted only to authorized users, to avoid compromising the security of the database. Some of the access control methods in use are discussed here, but the list is not exhaustive. Existing solutions for database security, which are defined for Relational Database Management Systems, are not appropriate for Object Oriented Database Management Systems. This is because OODBMSs differ in terms of the security models they follow: they are richer than the ordinary relational data models, mainly with respect to the authorization principles they follow. So either the relational data models have to be extended to incorporate the object oriented concepts as well, or new data models have to be created for the object oriented case. Object models provide a superset of the functionalities of relational database management systems [5].

Discretionary Access Control

In this case, the creator of an object becomes its owner and has full rights over that object. The owner then defines the rights to access the information.

Mandatory Access Control

Objects in this case are assigned labels, on the basis of which users have the right to access the information in a database. The security labels assigned could be top secret, secret, classified, or unclassified. In this case, the system itself mandates the users' rights to access or modify data.

Discretionary Access Control in OODBMS

In an object oriented database architecture, objects are stored in the database, as compared to the relational architecture in which strings, values or integers are stored instead. The objects have attributes as well as methods which are invoked to query data from the database.
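A compact sketch of the two models side by side: discretionary control consults an owner-maintained table of grants, while mandatory control compares ordered security labels (a "no read up" rule). The label ordering, user names and grant layout are illustrative assumptions, not the implementation of any particular DBMS.

# --- Discretionary access control: owners grant rights explicitly. ---
grants = {("alice", "salaries", "read")}          # owner-issued privileges

def dac_allows(user, obj, action):
    return (user, obj, action) in grants

# --- Mandatory access control: system-wide ordered labels ("no read up"). ---
LEVELS = {"unclassified": 0, "classified": 1, "secret": 2, "top secret": 3}

def mac_allows_read(subject_label, object_label):
    return LEVELS[subject_label] >= LEVELS[object_label]

print(dac_allows("alice", "salaries", "read"))    # True: grant exists
print(dac_allows("bob", "salaries", "read"))      # False: no grant
print(mac_allows_read("secret", "top secret"))    # False: reading up is denied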
Mandatory Access Control in OODBMS

In an object oriented database, mandatory access control extends the labeling scheme to the stored objects and to the methods invoked to query them.

Inference Issue Avoidance

In cases where legitimate data is accessed by the user through queries, there is a risk that he infers further information which does not concern him. In such cases the security of user data is compromised.

Data Privacy Protection

User data becomes identifiable when paired with some existing information. Some mechanism has to be adopted that prevents leakage of confidential information from data that is publicly available. In this regard the process of data anonymization is used, which de-identifies the information for privacy preservation. Even with anonymization, the inference problem still remains in the data mining field. Even though a database is sanitized by removing private information, the use of data mining techniques may allow one to recover the removed information. Several approaches have been proposed, some of which are specialized for specific data mining techniques, such as tools for association rule mining or classification systems, whereas others are independent of the specific data mining technique. In general, all approaches are based on modifying or perturbing the data in some way [2].

Security in Distributed Databases

Some of the most important security requirements for database management systems are: multi-level access control, confidentiality, reliability, integrity, and recovery [8]. Data mining systems are being extended to function in a distributed environment. These systems are called distributed data mining systems, and security problems may be exacerbated in them [8].
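As a sketch of the de-identification idea described above, the snippet below applies the kind of generalization step used in k-anonymity: quasi-identifier values are coarsened until each combination occurs at least k times. The records, the coarsening rules and the choice of k are invented for illustration.

from collections import Counter

# Hypothetical records: (age, zip code) are the quasi-identifiers.
records = [(34, "47677"), (35, "47678"), (36, "47602"), (37, "47606")]

def generalize(rec):
    """Coarsen age to a decade band and zip code to a 3-digit prefix."""
    age, zipcode = rec
    return (f"{age // 10 * 10}-{age // 10 * 10 + 9}", zipcode[:3] + "**")

anonymized = [generalize(r) for r in records]
counts = Counter(anonymized)

k = 2
print(all(c >= k for c in counts.values()))  # True if every group has >= k members
print(anonymized)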

Thursday, September 19, 2019

Blame

Are some people more to blame for a crime than others, and if so, why? This is a question which many people wonder about today. I think the answer is yes. People who are brought up in a certain way are more likely to commit a certain crime than others. In the following I will consider why certain people are more to blame than others for the crimes that they commit.

Before looking at the issue of whether some people are more to blame than others, we must first look at the reasons why people may commit crimes and the types of crimes. There are a variety of reasons for a person to commit a crime, including greed, the wish to be famous, need for money, pure hate, and insanity. The crimes they commit range from murder to robbery to rape.

After looking at the reasons why and the types of crimes, it is now possible to look at the larger issue at hand. If a person is poor and performs a robbery to get some money to feed their baby, should they be more to blame than someone who is rich but performs the same robbery out of greed? There is no right answer to this, but I think that the person who is robbing the store to help his kid is less to blame. I say this because even though the person is poor, it is not always his fault. He may not be able to get money for his baby but still feels the need to provide for it. This is what forces him to rob the store. I feel people should look at him with a bit of compassion, because the reason he was committing the crime was not a selfish one but one that benefits others. On the other hand, the rich man who robbed the store because he was greedy should be held more accountable for his crime. Since he is rich, did not need the money, and only committed the crime because of his own selfishness, he is more to blame.

You might ask why the blame should be divided differently between the two people if in fact they committed the same crime. Now, it is true that they are both supposed to be equal, but are they really truly equal? How can we say that a poor person is equal to a rich one?