News Bias Identification: Critical Analysis


Abstract:

Many people today fall victim to fake and biased information spread through social media, newspapers, and other channels, and readers often cannot tell whether a news item is genuine or fake. Biased news is a major problem in our day-to-day life, so it is desirable to have a system that lets users access balanced news content and determine whether an article is fake or biased. Most existing methods for fake news detection are supervised, which requires an enormous amount of time and labor to construct a reliably annotated dataset, and they usually focus on disputed topics (e.g., marriage, gay rights) that tend to reveal a distinctive style of bias across news agencies. Because existing systems concentrate only on specific topics, users often do not get the correct output for the input they need, and system accuracy is low. To overcome this problem, we develop a system that identifies whether information is fake or real using algorithms such as Naive Bayes, Decision Tree, and Random Forest. We use a training dataset to train the models and a test dataset to evaluate and improve accuracy, and the system also reports the probability of the news being fake, which increases the accuracy of our system.

Keywords: media bias, text mining, sentiment analysis, machine learning, news domain, SVM, accuracy, recall, F-measure.


I. Introduction

The rise of social media has democratized content creation and has made it easy for everybody to share and spread information online. On the positive side, this has given rise to citizen journalism, thus enabling much faster dissemination of information compared to what was possible with newspapers, radio, and TV. On the negative side, stripping traditional media of their gate-keeping role has left the public unprotected against the spread of misinformation, which can now travel at breaking-news speed over the same democratic channel. This has given rise to the proliferation of false information that is typically created either to attract network traffic and gain financially from showing online advertisements, e.g., as is the case with clickbait, or to affect individual people's beliefs and ultimately to influence major events such as political elections. There are strong indications that false information was weaponized at an unprecedented scale during the 2016 U.S. presidential campaign.

Naturally, limiting the sharing of "fake news" is a major focus for social media platforms such as Facebook and Twitter. Additional efforts to combat "fake news" have been led by fact-checking organizations such as Snopes, FactCheck and Politifact, which manually verify claims. Unfortunately, this is inefficient for several reasons. First, manual fact-checking is slow, and debunking false information comes too late to have any significant impact. At the same time, automatic fact-checking lags behind in terms of accuracy, and it is generally not trusted by human users. In fact, even when done by reputable fact-checking organizations, debunking does little to convince those who already believe the false information. A third, and arguably more promising, way to fight "fake news" is to focus on their source. While "fake news" spreads primarily on social media, it still needs a "home", i.e., a website where it is posted. Thus, if a website is known to have published non-factual information in the past, it is likely to do so in the future. Verifying the reliability of the source of information is one of the basic tools that journalists in traditional media use to verify information.

Our system detects fake news in the data spread on social networks such as Facebook and Twitter and in other sources. Machine learning is used to find such fake or biased news.

II. Literature survey

The study of biased news reporting has a long tradition in the social sciences going back at least to the 1950s. In the classical definition of Williams, media bias must both be intentional, i.e., reflect a conscious act or choice, and be sustained, i.e., represent a systematic tendency rather than an isolated incident. In this article, we thus focus on intentional media bias, which journalists and other involved parties implement purposely to achieve a specific goal. This definition sets the media bias that we consider apart from other sources of unintentional bias in news coverage. Sources of unintentional bias include the influence of news values throughout the production of news and, later, the news consumption by readers with different backgrounds. Examples of news values include the geographic vicinity of a newsworthy event to the location of the news outlet and its consumers, or the effects of the general visibility or societal relevance of a specific topic.

Various definitions of media bias and its specific forms exist, each depending on the particular context and research questions studied. Mullainathan and Shleifer define two high-level types of media bias concerned with the intention of news outlets when writing articles: ideology and spin. Ideological bias is present if an outlet biases articles to promote a specific opinion on a topic. Spin bias is present if the outlet attempts to create a memorable story. The second definition of media bias that is commonly used distinguishes between three types: coverage, gatekeeping, and statement bias. Coverage bias is concerned with the visibility of topics or entities, such as a person or country, in media coverage. Gatekeeping bias, also called selection bias or agenda bias, relates to which stories media outlets select or reject for reporting. Statement bias, also called presentation bias, is concerned with how articles choose to report on concepts. For example, in the US elections, a well-observed bias arises from an editorial slant, in which the editorial position on a given presidential candidate affects the quantity and tone of a newspaper's coverage. Further forms of media bias can be found in the extensive discussion by D'Alessio and Allen.

III. Methodology

A. Naive Bayes

We use a Naive Bayes classifier because it is simple and fast to compute, scales well to large amounts of training data, is much less sensitive to missing data, and is frequently used for text classification. It is a conditional probability model: given a problem instance to be classified, represented by a vector x = (x1, ..., xn) of n features (independent variables), it assigns to this instance the probabilities p(Ck | x1, ..., xn) for each of K possible outcomes or classes Ck. The problem with this formulation is that if the number of features n is large, or if a feature can take on a large number of values, then basing such a model on probability tables is infeasible. We therefore reformulate the model to make it more tractable. Using Bayes' theorem, the conditional probability can be decomposed as p(Ck | x) = p(Ck) p(x | Ck) / p(x). Under the "naive" assumption that each feature xi is conditionally independent of the others given the class, for i = 1, ..., n, this becomes p(Ck | x1, ..., xn) = (1/Z) p(Ck) Π_{i=1}^{n} p(xi | Ck), where the evidence Z = p(x) is a scaling factor that depends only on the feature values and is therefore constant once they are known.
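
To make the computation concrete, the toy sketch below (written in Python with scikit-learn, which is only an assumption about tooling, not the exact implementation used in this work) trains a multinomial Naive Bayes model on a few invented headlines and prints p(Ck | x) for a new one:

    # Toy Naive Bayes sketch; the headlines and labels are invented purely for illustration
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["celebrity cures disease with secret trick",
             "government publishes official budget report",
             "miracle pill melts fat overnight",
             "parliament passes new education bill"]
    labels = ["fake", "real", "fake", "real"]

    vec = CountVectorizer()                      # word counts play the role of the features x1..xn
    X = vec.fit_transform(texts)
    model = MultinomialNB().fit(X, labels)       # estimates p(Ck) and p(xi | Ck) from the counts

    test = vec.transform(["secret trick melts fat overnight"])
    print(model.classes_)                        # class order, e.g. ['fake' 'real']
    print(model.predict_proba(test))             # p(Ck | x1, ..., xn) for each class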

B. Support Vector Machine

In machine learning, support vector machines (SVMs) are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate classes are divided by a clear gap that is as wide as possible. The SVM is a powerful algorithm: it is very effective in high-dimensional spaces and can deal with different data types by changing the kernel.
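
As a rough illustration only (the CSV file name news_train.csv and its text/label columns are assumptions, not taken from this paper), a linear SVM over TF-IDF features could be trained as follows:

    # Illustrative linear SVM sketch; file name and column names are assumptions
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    data = pd.read_csv("news_train.csv")                       # assumed columns: "text", "label"
    X_train, X_test, y_train, y_test = train_test_split(
        data["text"], data["label"], test_size=0.2, random_state=42)

    svm_model = make_pipeline(TfidfVectorizer(), LinearSVC())  # TF-IDF features + maximum-margin classifier
    svm_model.fit(X_train, y_train)
    print("SVM accuracy:", svm_model.score(X_test, y_test))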

C. Random Forest

Random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random decision forests correct for decision trees' habit of overfitting to their training set. The first algorithm for random decision forests was created by Tin Kam Ho using the random subspace method, which, in Ho's formulation, is a way to implement the 'stochastic discrimination' approach to classification proposed by Eugene Kleinberg.

Decision trees are a popular method for a variety of machine learning tasks. Tree learning 'comes closest to meeting the requirements for serving as an off-the-shelf procedure for data mining', because it is invariant under scaling and various other transformations of feature values, is robust to the inclusion of irrelevant features, and produces inspectable models. However, single trees are seldom accurate.
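
A hedged sketch of a random forest on the same assumed data layout is given below; it builds 100 trees whose majority vote gives the final class:

    # Illustrative random forest sketch; file name and column names are assumptions
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    data = pd.read_csv("news_train.csv")                       # assumed columns: "text", "label"
    X_train, X_test, y_train, y_test = train_test_split(
        data["text"], data["label"], test_size=0.2, random_state=42)

    rf_model = make_pipeline(
        TfidfVectorizer(),
        RandomForestClassifier(n_estimators=100, random_state=42))  # 100 trees; class = majority vote
    rf_model.fit(X_train, y_train)
    print("Random forest accuracy:", rf_model.score(X_test, y_test))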

D. Decision Tree

A decision tree is a structure that consists of a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node. The decision tree algorithm falls under the category of supervised learning, and it can be used to solve both regression and classification problems. The decision tree uses a tree representation to solve the problem, in which every leaf node corresponds to a class label and attributes are represented on the internal nodes of the tree. The benefits of using a decision tree are as follows (a brief sketch follows this list):

  1. It does not require any domain knowledge.
  2. It is convenient to comprehend.
  3. The learning and classification steps of a decision tree are simple and fast.
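
To illustrate how inspectable a small tree is (points 2 and 3 above), the sketch below, again assuming the news_train.csv layout used earlier, prints the learned rules as plain text:

    # Illustrative decision tree sketch; prints the learned rules to show the tree structure
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = pd.read_csv("news_train.csv")                  # assumed columns: "text", "label"
    vec = TfidfVectorizer(max_features=500)               # limit vocabulary to keep the tree readable
    X = vec.fit_transform(data["text"])

    dt = DecisionTreeClassifier(max_depth=4, random_state=42)
    dt.fit(X, data["label"])
    print(export_text(dt, feature_names=list(vec.get_feature_names_out())))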

IV. Proposed system

We have applied text mining techniques and machine learning in an effort to detect bias in news agencies. We crawled news articles from seven major outlets in the western media, then preprocessed them into a useful structured form and built sentiment classifiers that are able to predict article bias. Different machine learning algorithms are used to predict whether the news is fake or real. The training data is first saved as a .csv file in the project folder and used to train the machine learning algorithms; feature selection and data processing are the first steps of the training part of the project. Different types of classifiers are used for news prediction, such as Naive Bayes, random forest, and SVM. Two actors are involved: the admin and the system. The admin provides the training data and the testing data, with proper data processing, to the system. The system contains one or more algorithms such as decision tree, random forest, and Naive Bayes; the chosen algorithm generates a classifier from the training data, the admin applies the testing data to that classifier, and it outputs whether the given news item is fake or not.
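
A minimal end-to-end sketch of this training and testing flow is shown below; the file training_data.csv, its columns, and the specific feature choice (TF-IDF) are assumptions made only for illustration:

    # Illustrative train/test flow comparing several classifiers; file and column names are assumptions
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    data = pd.read_csv("training_data.csv")               # assumed columns: "text", "label"
    X_train, X_test, y_train, y_test = train_test_split(
        data["text"], data["label"], test_size=0.2, random_state=42)

    vec = TfidfVectorizer()
    Xtr, Xte = vec.fit_transform(X_train), vec.transform(X_test)

    classifiers = {
        "Naive Bayes": MultinomialNB(),
        "Decision Tree": DecisionTreeClassifier(random_state=42),
        "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
    }
    for name, clf in classifiers.items():                 # train each classifier and report test accuracy
        clf.fit(Xtr, y_train)
        print(name, "accuracy:", accuracy_score(y_test, clf.predict(Xte)))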

V. System architecture

A. Data Preparation

In this step, both the training data and the testing data are prepared in the format required by the machine learning algorithms. Data observation is performed first on the training and testing data to check their syntax, and the data are then distributed into label and data files.

A data quality check is performed to verify the quality of the data, and the data are tokenized so that they are ready for the next process (a rough sketch of these steps is given below). During data preparation we also used two approaches, described in the following subsections: stop word removal and stemming.
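
The snippet below is only a sketch of this preparation step under assumed file and column names; it drops empty and duplicate rows as a basic quality check and tokenizes the text with NLTK:

    # Illustrative data preparation sketch; file name and column names are assumptions
    import pandas as pd
    from nltk.tokenize import word_tokenize                # requires: nltk.download("punkt")

    data = pd.read_csv("news_train.csv")                   # assumed columns: "text", "label"
    data = data.dropna(subset=["text", "label"])            # quality check: drop rows with missing values
    data = data.drop_duplicates(subset=["text"])            # quality check: drop duplicate articles
    data["tokens"] = data["text"].str.lower().apply(word_tokenize)   # tokenize for the next steps
    print(data[["label", "tokens"]].head())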

1. Stop word removal

Stop words are insignificant words in a language that only create noise; they are used very frequently in sentences. We removed common words such as a, as, about, an, are, at, the, by, for, from, how, in, is, of, on, or, that, these, this, too, was, what, when, where, who, and will. These words were removed from each document, and the processed documents are passed to the next step.
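
A small sketch of this step, using NLTK's English stop-word list rather than a hand-written one (an assumption about tooling), could look like this:

    # Illustrative stop word removal sketch using NLTK's English stop-word list
    from nltk.corpus import stopwords                      # requires: nltk.download("stopwords")

    stop_words = set(stopwords.words("english"))
    tokens = ["the", "senate", "is", "voting", "on", "a", "new", "budget", "bill"]   # invented example tokens
    filtered = [t for t in tokens if t not in stop_words]
    print(filtered)                                        # ['senate', 'voting', 'new', 'budget', 'bill']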

2. Stemming

After stop word removal, we apply stemming to standardize the data: it reduces each word to its root form.
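
For example, with NLTK's Porter stemmer (again only an assumed choice of tool), inflected words are mapped back to a common stem:

    # Illustrative stemming sketch with NLTK's Porter stemmer
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()
    words = ["voting", "voted", "votes", "elections", "reporting"]   # invented example tokens
    print([stemmer.stem(w) for w in words])                # e.g. ['vote', 'vote', 'vote', 'elect', 'report']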

B. Classification

Multiple classification algorithms are used for news bias prediction, such as random forest, Naive Bayes, and decision tree. These classifiers work on the training and testing data according to the process defined above.

C. Prediction

Prediction of news bias is done using the training and testing data. Testing data is supplied to find the actual bias in the news with the help of the training data, and the predicted class is returned together with its probability (a sketch is given below).
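
A hedged sketch of this prediction step is shown below; the training file, its columns, the choice of a random forest pipeline, and the sample headline are all assumptions made for illustration:

    # Illustrative prediction sketch; file name, columns, and the sample headline are assumptions
    import pandas as pd
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import RandomForestClassifier

    data = pd.read_csv("news_train.csv")                   # assumed columns: "text", "label"
    model = make_pipeline(TfidfVectorizer(),
                          RandomForestClassifier(n_estimators=100, random_state=42))
    model.fit(data["text"], data["label"])                 # train on the full training set

    new_article = ["Miracle cure discovered, doctors shocked"]   # invented unseen headline
    print(model.predict(new_article))                      # predicted class, e.g. 'fake' or 'real'
    print(model.predict_proba(new_article))                # probability for each class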

VI. Conclusion

Our analysis results have shown that the majority of these features have a positive effect on performance, with the articles from the target website, its Wikipedia page, and its Twitter account being the most important (in this order). We further performed an ablation study of the impact of the individual types of features for each task, which can also provide general directions for future analysis. In future work, we intend to tackle the task as ordinal regression and, further, to model the inter-dependencies between factuality and bias in a joint model. We are also interested in characterizing the factuality of reporting for media in other languages. The data are processed using multiple algorithms, with variation in the accuracy of the system. This approach determines whether the given information is fake or not, with the accuracy reported as a percentage, so that the user gets a clear idea of the system's output.

VII. References

  1. Vinod Nair and Geoffrey E. Hinton. "Rectified linear units improve restricted Boltzmann machines." In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814 (2010).
  2. Anish Anil Patankar and Joy Bose. "Bias Based Navigation for News Articles and Media." In Proc. NLDB, pp. 465–470 (2016).
  3. Sailesh Kumar Sathish, Anish Anil Patankar, and Nirmesh Neema. "Semantics-Based Browsing Using Latent Topic Warped Indexes." In Proc. Tenth International Conference on Semantic Computing (ICSC), IEEE (2016).
  4. M. Atkinson and E. Van der Goot. "Near real time information mining in multilingual news." In Proceedings of the 18th International Conference on World Wide Web, pp. 1153–1154, ACM (2009).
  5. D. Bernhardt, S. Krasa, and M. Polborn. "Political polarization and the electoral effects of media bias." Journal of Public Economics, 92(5), 1092–1104 (2008).
  6. H. C. Carneiro, F. M. França, and P. M. Lima. "Multilingual part-of-speech tagging with weightless neural networks." Neural Networks, vol. 66, pp. 11–21 (2015).
  7. Aleksander, W. Thomas, and P. Bowden. "WISARD: a radical step forward in image recognition." Sensor Review, vol. 4, no. 3, pp. 120–124 (1984).
  8. V. L. Rubin, Y. Chen, and N. J. Conroy. "Deception detection for news: three types of fakes." In Proceedings of the 78th ASIS&T Annual Meeting: Information Science with Impact: Research in and for the Community (ASIST 2015), Article 83, American Society for Information Science, Silver Springs (2015).
  9. K. Schulten. "Skills and Strategies: Fake News vs. Real News: Determining the Reliability of Sources." The Learning Network (2017).
  10. Hunt Allcott and Matthew Gentzkow. "Social media and fake news in the 2016 election." Technical Report, National Bureau of Economic Research (2017).