Sunday, 23 November 2008

Copyright


Copyright is a legal concept, enacted by governments, giving the creator of an original work of authorship exclusive rights to control its distribution for a certain time period, after which the work enters the public domain. Generally, it is "the right to copy", but it usually provides the author with other rights as well, such as the right to be credited for the work, to determine who may adapt the work to other forms, who may perform the work, who may financially benefit from it, and other related rights. It is a form of intellectual property (like the patent, the trademark, and the trade secret) applicable to any expressible form of an idea or information that is substantive and discrete. Copyright was initially conceived as a way for governments in Europe to restrict printing; the contemporary intent of copyright is to promote the creation of new works by giving authors control of and profit from them.

Copyright has been internationally standardized, lasting between fifty and one hundred years after the author's death, or a finite period for anonymous or corporate authorship. Some jurisdictions have required formalities to establish copyright, but most recognize copyright in any completed work without formal registration. Generally, copyright is enforced as a civil matter, though some jurisdictions do apply criminal sanctions.

Most jurisdictions recognize copyright limitations, allowing "fair" exceptions to the author's exclusivity of copyright and giving users certain rights. The development of the Internet, digital media, and computer network technologies such as peer-to-peer file sharing has prompted reinterpretation of these exceptions, introduced new difficulties in enforcing copyright, and inspired additional challenges to copyright law's philosophic basis. Simultaneously, businesses with great economic dependence upon copyright have advocated the extension and expansion of their copyrights and sought additional legal and technological enforcement.

http://en.wikipedia.org/wiki/Copyrights


Sunday, 9 November 2008

Information system (IS)

The term information system (IS) sometimes refers to a system of persons, data records and activities that process the data and information in an organization, and it includes the organization's manual and automated processes. Computer-based information systems are the field of study for information technology, elements of which are sometimes called an "information system" as well, a usage some consider to be incorrect.

Areas of work

Information Systems has a number of different areas of work:

  • Information Systems Strategy
  • Information Systems Management
  • Information Systems Development

Each of these branches out into a number of sub-disciplines that overlap with other scientific and managerial disciplines, such as computer science, pure and engineering sciences, social and behavioral sciences, and business management.

There are a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. People with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue."

http://en.wikipedia.org/wiki/Information_system

Knowledge Management (KM)

Knowledge Management (KM) comprises a range of practices used in an organisation to identify, create, represent, distribute and enable adoption of what it knows, and how it knows it. It has been an established discipline since 1995 [1], with a body of university courses in fields including business administration, information systems, management, and library and information science. More recently, other schools, including those focused on information and media, computer science, public health, and public policy, have also started to contribute. Many large companies and non-profit organisations have resources dedicated to internal KM efforts, often as a part of their 'Business Strategy', 'Information Technology', or 'Human Resource Management' departments. Several consulting companies also exist that provide strategy and advice regarding KM to these organisations.

KM efforts typically focus on organisational objectives such as improved performance, competitive advantage, innovation, developmental processes, the sharing of lessons learned, and continuous improvement of the organisation. KM efforts overlap with Organisational Learning, and may be distinguished from it by a greater focus on the management of knowledge as a strategic asset and on encouraging the exchange of knowledge. KM efforts can help individuals and groups to share valuable organisational insights, to reduce redundant work, to avoid 're-inventing the wheel', to reduce training time for new employees, to retain intellectual capital as employees turn over in an organisation, and to adapt to changing environments and markets.

http://en.wikipedia.org/wiki/Knowledge_management


Sunday, 2 November 2008

Computer history

The development of the modern-day computer was the result of advances in technology and man's need to quantify. Papyrus helped early man to record language and numbers. The abacus was one of the first counting machines. Some of the earlier mechanical counting machines lacked the technology to make the design work; for instance, some had parts made of wood before metal manipulation and manufacturing were feasible. Imagine the wear on wooden gears. This history of computers site includes the names of early pioneers of math and computing and links to related sites about the History of Computers, for further study. The site would be a good Web adjunct to accompany any book on the History of Computers or Introduction to Computers. The "H" section includes a link to the History of the Web Beginning at CERN, which includes a bibliography and related links. Hitmill.com strives to always include related links for a broader educational experience.

http://www.hitmill.com/computers/computerhx1.html

Sunday, 26 October 2008

Evaluation

Evaluation is the systematic determination of the merit, worth, and significance of something or someone, using criteria governed by a set of standards. Evaluation is often used to characterize and appraise subjects of interest in a wide range of human enterprises, including the arts, criminal justice, foundations and non-profit organizations, government, health care, and other human services.

Evaluation standards and meta-evaluation

Depending on the topic of interest, there are professional groups that review the quality and rigor of the evaluation process. One guiding principle within the U.S. evaluation community, energetically supported by Michael Quinn Patton, has been that evaluations should be useful.
Furthermore, international organizations such as the IMF and the World Bank have independent evaluation functions. The various funds, programmes, and agencies of the United Nations have a mix of independent, semi-independent and self-evaluation functions, which have organized themselves into the system-wide United Nations Evaluation Group (UNEG), which works to strengthen the function and to establish UN norms and standards for evaluation. There is also an evaluation group within the OECD-DAC, which endeavors to improve development evaluation standards.
The Joint Committee on Standards for Educational Evaluation has developed standards for educational programmes, personnel, and student evaluation. The Joint Committee standards are broken into four sections: Utility, Feasibility, Propriety, and Accuracy. Various European institutions have also prepared their own standards, more or less related to those produced by the Joint Committee. They provide guidelines about basing value judgments on systematic inquiry, evaluator competence and integrity, respect for people, and regard for the general and public welfare.
The American Evaluation Association has created a set of Guiding Principles for evaluators. The order of these principles does not imply priority among them; priority will vary by situation and evaluator role. The principles run as follows:
- Systematic Inquiry: Evaluators conduct systematic, data-based inquiries about whatever is being evaluated.
- Competence: Evaluators provide competent performance to stakeholders.
- Integrity / Honesty: Evaluators ensure the honesty and integrity of the entire evaluation process.
- Respect for People: Evaluators respect the security, dignity and self-worth of the respondents, program participants, clients, and other stakeholders with whom they interact.
- Responsibilities for General and Public Welfare: Evaluators articulate and take into account the diversity of interests and values that may be related to the general and public welfare.

Source link

Monday, 20 October 2008

Information literacy

Several conceptions and definitions of information literacy have become prevalent. For example, one conception defines information literacy in terms of a set of competencies that an informed citizen of an information society ought to possess to participate intelligently and actively in that society (from [1]).
The American Library Association's (ALA) Presidential Committee on Information Literacy, Final Report states that, "To be information literate, a person must be able to recognize when information is needed and have the ability to locate, evaluate, and use effectively the needed information" (1989).
Jeremy Shapiro & Shelley Hughes (1996) define information literacy as "a new liberal art that extends from knowing how to use computers and access information to critical reflection on the nature of information itself, its technical infrastructure, and its social, cultural, and philosophical context and impact" (from [2]).
Information literacy is becoming a more important part of K-12 education. It is also a vital part of university-level education (Association of College and Research Libraries, 2007). In our information-centric world, students must develop skills early on so that they are prepared for post-secondary opportunities, whether in the workplace or in further education.

History of the concept

A seminal event in the development of the concept of information literacy was the establishment of the American Library Association's Presidential Committee on Information Literacy, whose final report outlined the importance of the concept. The concept of information literacy built upon and expanded the decades-long efforts of librarians to help their users learn about, and how to utilize, research tools (e.g., periodical indexes) and materials in their own libraries. Librarians wanted users to be able to transfer and apply this knowledge to new environments and to research tools that were new to them. Information literacy expands this effort beyond libraries and librarians, and focuses on the learner rather than the teacher (Grassian, 2004; Grassian and Kaplowitz, 2001, pp. 14-20).
Other important events include:
- 1974: The related term ‘Information Skills’ was first introduced by Zurkowski to refer to people who are able to solve their information problems by using relevant information sources and applying relevant technology (Zurkowski, 1974).
- 1983: A Nation at Risk: The Imperative for Educational Reform warns that we are "raising a new generation of Americans that is scientifically and technologically illiterate."
- 1986: Educating Students to Think: The Role of the School Library Media Program outlines the roles of the library and information resources in K-12 education.
- 1987: Information Skills for an Information Society: A Review of Research includes library skills and computer skills in the definition of information literacy.
- 1988: Information Power: Guidelines for School Library Media Programs is published.
- 1989: The National Forum on Information Literacy (NFIL), a coalition of more than 90 national and international organizations, has its first meeting.
- 1998: Information Power: Building Partnerships for Learning emphasizes that the mission of the school library media program is "to ensure that students and staff are effective users of ideas and information."

Specific aspects of information literacy (Shapiro and Hughes, 1996)

- Tool literacy, or the ability to understand and use the practical and conceptual tools of current information technology relevant to education and the areas of work and professional life that the individual expects to inhabit.
- Resource literacy, or the ability to understand the form, format, location and access methods of information resources, especially daily expanding networked information resources.
- Social-structural literacy, or knowing that and how information is socially situated and produced.
- Research literacy, or the ability to understand and use the IT-based tools relevant to the work of today's researcher and scholar.
- Publishing literacy, or the ability to format and publish research and ideas electronically, in textual and multimedia forms (including via the World Wide Web, electronic mail and distribution lists, and CD-ROMs).
- Emerging technology literacy, or the ability to continuously adapt to, understand, evaluate and make use of continually emerging innovations in information technology, so as not to be a prisoner of prior tools and resources, and to make intelligent decisions about the adoption of new ones.
- Critical literacy, or the ability to evaluate critically the intellectual, human and social strengths and weaknesses, potentials and limits, benefits and costs of information technologies. Ira Shor defines critical literacy as habits of thought, reading, writing, and speaking which go beneath surface meaning, first impressions, dominant myths, official pronouncements, traditional clichés, received wisdom, and mere opinions, to understand the deep meaning, root causes, social context, ideology, and personal consequences of any action, event, object, process, organization, experience, text, subject matter, policy, mass media, or discourse.

Educational schemata

One view of the components of information literacy, based on the Big6 by Mike Eisenberg and Bob Berkowitz.
1. Defining the task: The first step in the Information Literacy strategy is to clarify and understand the requirements of the problem or task for which information is sought. Basic questions asked at this stage:
- What is known about the topic?
- What information is needed?
- Where can the information be found?
2. Locating: The second step is to identify sources of information and to find those resources. Depending upon the task, sources that will be helpful may vary. Sources may include: books; encyclopedias; maps; almanacs; etc. Sources may be in electronic, print, social bookmarking tools, or other formats.
3. Selecting/analyzing: Step three involves examining the resources that were found. The information must be determined to be useful or not useful in solving the problem. The useful resources are selected and the inappropriate resources are rejected.
4. Organizing/synthesizing: In the fourth step, the information that has been selected is organized and processed so that knowledge and solutions are developed. Examples of basic steps in this stage are:
4.1 Discriminating between fact and opinion
4.2 Basing comparisons on similar characteristics
4.3 Noticing various interpretations of data
4.4 Finding more information if needed
4.5 Organizing ideas and information logically
5. Creating/presenting: In step five the information or solution is presented to the appropriate audience in an appropriate format. A paper is written. A presentation is made. Drawings, illustrations, and graphs are presented.
6. Evaluating: The final step in the Information Literacy strategy involves the critical evaluation of the completion of the task or the new understanding of the concept. Was the problem solved? Was new knowledge found? What could have been done differently? What was done well?

Search Engine

A Web search engine is a search engine designed to search for information on the World Wide Web. Information may consist of web pages, images, and other types of files. Some search engines also mine data available in newsgroups, databases, or open directories. Unlike Web directories, which are maintained by human editors, search engines operate algorithmically or use a mixture of algorithmic and human input.

History

Before there were search engines there was a complete list of all webservers. The list was edited by Tim Berners-Lee and hosted on the CERN webserver. One historical snapshot from 1992 remains.[1] As more and more webservers went online the central list could not keep up. On the NCSA Site new servers were announced under the title "What's New!" but no complete listing existed any more.[2]
The very first tool used for searching on the (pre-web) Internet was Archie.[3] The name stands for "archive" without the "v." It was created in 1990 by Alan Emtage, a student at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites.
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.
The first Web search engine was Wandex, a now-defunct index collected by the World Wide Web Wanderer, a web crawler developed by Matthew Gray at MIT in 1993. Another very early search engine, Aliweb, also appeared in 1993. JumpStation (released in early 1994) used a crawler to find web pages for searching, but search was limited to the titles of web pages only. One of the first "full text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it let users search for any word in any web page, which has since become the standard for all major search engines. It was also the first search engine to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.
Soon after, many search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Yahoo! was among the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than full-text copies of web pages. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996, Netscape was looking to give a single search engine an exclusive deal to be its featured search engine. There was so much interest that instead a deal was struck with Netscape by five of the major search engines, whereby for $5 million per year each would appear in rotation on the Netscape search engine page. These five engines were Yahoo!, Magellan, Lycos, Infoseek and Excite.
Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s.[4] Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine, and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001.
Around 2000, the Google search engine rose to prominence.[citation needed] The company achieved better results for many searches with an innovation called PageRank. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link to them, on the premise that good or desirable pages are linked to more than others. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal.
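
To make the iterative idea concrete, here is a minimal PageRank sketch in Python. The toy graph, the damping factor of 0.85, and the fixed iteration count are illustrative assumptions, not Google's production setup.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}              # uniform start
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:                              # dangling page:
                for p in pages:                           # spread its rank evenly
                    new_rank[p] += damping * rank[page] / n
            else:                                         # split rank among outlinks
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of four pages; the heavily linked-to page C scores highest.
toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))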
By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! relied on Google's search engine for its results until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoft first launched MSN Search (since re-branded Live Search) in the fall of 1998, using search results from Inktomi. In early 1999 the site began to display listings from LookSmart blended with results from Inktomi, except for a short time in 1999 when results from AltaVista were used instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot).
As of late 2007, Google was by far the most popular Web search engine worldwide.[5] [6] A number of country-specific search engine companies have become prominent; for example Baidu is the most popular search engine in the People's Republic of China and guruji.com in India.

How Web search engines work

A search engine operates in the following order:
- Web crawling
- Indexing
- Searching

Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link it sees; exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the actual text that was indexed, so it can be very useful when the content of the live page has been updated and the search terms no longer appear in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the principle of least astonishment: the user normally expects the search terms to appear on the returned pages. Beyond that increased relevance, cached pages may also preserve data that is no longer available elsewhere.
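
As a concrete illustration of the crawl step, below is a minimal crawler sketch in Python using only the standard library. The seed URL, user-agent string, and page limit are hypothetical, and a real spider would add politeness delays, parallelism, content deduplication, and far more robust error handling.

# Minimal crawl sketch: fetch pages, honour robots.txt, follow links.
# All names here (toy-spider, crawl, LinkExtractor) are illustrative.
import urllib.request
import urllib.robotparser
from urllib.parse import urljoin, urlparse
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, limit=10, agent="toy-spider"):
    robots = {}                              # one robots.txt parser per host
    queue, seen, pages = [seed], {seed}, {}
    while queue and len(pages) < limit:
        url = queue.pop(0)
        base = "{0.scheme}://{0.netloc}".format(urlparse(url))
        if base not in robots:               # honour robots.txt exclusions
            rp = urllib.robotparser.RobotFileParser(base + "/robots.txt")
            try:
                rp.read()
            except OSError:
                pass                         # unreadable robots.txt: host is skipped
            robots[base] = rp
        if not robots[base].can_fetch(agent, url):
            continue
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue
        pages[url] = html                    # keep the raw page for the indexer
        extractor = LinkExtractor()
        extractor.feed(html)
        for link in extractor.links:         # follow every link we see
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages

# Usage: pages = crawl("http://example.com/")  # seed URL is an assumption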
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords.
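
A toy version of this index lookup with boolean operators might look like the sketch below; the three-document corpus is invented for illustration, and real engines add stemming, phrase and proximity matching, and compressed index structures.

import re

# Hypothetical three-document corpus.
documents = {
    "doc1": "the quick brown fox jumps over the lazy dog",
    "doc2": "a quick tour of web search engines",
    "doc3": "the lazy afternoon of a brown dog",
}

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

# Inverted index: each word maps to the set of documents containing it.
index = {}
for doc_id, text in documents.items():
    for word in tokenize(text):
        index.setdefault(word, set()).add(doc_id)

def posting(word):
    """Set of documents containing the word (empty if unseen)."""
    return index.get(word, set())

print(posting("quick") & posting("dog"))   # quick AND dog    -> {'doc1'}
print(posting("fox") | posting("tour"))    # fox OR tour      -> {'doc1', 'doc2'}
print(posting("lazy") - posting("fox"))    # lazy AND NOT fox -> {'doc3'}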
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of webpages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.
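
One common family of ranking methods scores documents by weighted term overlap with the query. The sketch below ranks a toy corpus with tf-idf weighting; the corpus, query, and smoothing choice are assumptions, and production engines blend many more signals (link analysis, freshness, personalization).

import math
import re
from collections import Counter

# Hypothetical corpus to rank against a query.
documents = {
    "doc1": "search engines rank pages by relevance",
    "doc2": "a crawler fetches pages for the search index",
    "doc3": "ranking methods change as the web changes",
}

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

term_counts = {d: Counter(tokenize(t)) for d, t in documents.items()}
n_docs = len(documents)

def idf(term):
    """Smoothed inverse document frequency: rarer terms weigh more."""
    df = sum(1 for counts in term_counts.values() if term in counts)
    return math.log((1 + n_docs) / (1 + df)) + 1

def score(query, doc_id):
    """Sum of tf * idf over the query terms present in the document."""
    counts = term_counts[doc_id]
    length = sum(counts.values())
    return sum((counts[t] / length) * idf(t)
               for t in tokenize(query) if t in counts)

query = "search pages"
for doc_id in sorted(documents, key=lambda d: score(query, d), reverse=True):
    print(doc_id, round(score(query, doc_id), 3))  # best match first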
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of allowing advertisers to pay to have their listings ranked higher in search results. Those search engines which do not accept money for their search results make money by running search-related ads alongside the regular results; the search engines earn money every time someone clicks on one of these ads.
Revenue in the web search portals industry is projected to grow in 2008 by 13.4 percent, with broadband connections expected to rise by 15.1 percent. Between 2008 and 2012, industry revenue is projected to rise by 56 percent as Internet penetration still has some way to go to reach full saturation in American households. Furthermore, broadband services are projected to account for an ever increasing share of domestic Internet users, rising to 118.7 million by 2012, with an increasing share accounted for by fiber-optic and high speed cable lines.