Attitude Measurement and Issues


INTRODUCTION

Human attitudes are unique to each person. Attitudes are sets of thoughts, feelings, or behavioural tendencies that evaluate objects in either a positive or negative manner (Schwarz, 2015). Examples include our opinions on climate change, our beliefs about the side effects of vaccination in children, or something as simple as the credibility of politicians (Branscombe & Baron, 2017). Among researchers, there are questions about the extent to which attitudes differ in their strength (Branscombe & Baron, 2017). Attitudes can be categorized as strong or weak, and the two differ in many ways (Bohner & Dickel, 2011a). Attitudes are therefore assumed to involve some form of evaluation (Bohner & Dickel, 2011b). In addition, different measurement methods elicit different types of responses (Schwarz, 2015). To further understand the evaluation of attitudes, researchers measure them in two ways: explicitly or implicitly.

ATTITUDE MEASUREMENT METHOD

Explicit (Direct) Measurement

Explicit or direct measurement of attitudes requires respondents to openly state their evaluations or beliefs. This is the classic approach to measuring attitudes: researchers rely on answers to direct questions such as “Do you support or not support the current education syllabus?” (Schwarz, 2015). However, there is a risk of biased answers, depending on each respondent's interests or the constraints that may apply to them (Traczyk & Zaleskiewicz, 2016). For example, Nike's advertisements portray models with fit bodies who are usually handsome or beautiful. Such images attract people who wish they were fit and attractive themselves. If a survey is conducted and respondents are asked to pick their favourite sports brand, Nike is likely to score higher because it has already captured the respondents' interest beforehand (Branscombe & Baron, 2017).

Explicit measurement usually implements Likert scaling incorporated into questionnaires or surveys. The method was first introduced by Rensis Likert to replace conventional approaches that were very time-consuming when collecting self-reported data (Krabbe, 2017; Tullis & Albert, 2013). A Likert scale presents a gradient of options from negative to positive, on which respondents grade their opinion (Tullis & Albert, 2013). For instance, for the question “Do you agree or disagree with interracial marriage?” respondents are given a scale of one (1) to five (5), where one means strongly disagree, five means strongly agree, and three means neither agree nor disagree. Likert scales have been used to determine users' gaming experience, the influence of customer satisfaction on consumer buying decisions, and much more (Sanders, 2016; Yang, Cheng, & Tong, 2015). The scale is entirely centred on the subject: its main objective is to scale the respondents' opinions, not the item or object itself (Krabbe, 2017).
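
For illustration, the minimal sketch below (in Python, with hypothetical item names and responses that are not drawn from any cited study) shows how Likert responses might be coded, including a reverse-keyed item, and averaged into a single attitude score.

```python
# Hypothetical example: scoring a 5-point Likert scale
# (1 = strongly disagree, 5 = strongly agree). Items and values are illustrative only.
responses = {
    "item_1": 4,   # e.g., "Interracial marriage should be recognised."
    "item_2": 2,   # negatively worded (reverse-keyed) item
    "item_3": 5,
}

reverse_keyed = {"item_2"}          # items worded in the opposite direction
SCALE_MAX, SCALE_MIN = 5, 1

def score(item, value):
    """Reverse-code negatively worded items so that higher always means more favourable."""
    return SCALE_MAX + SCALE_MIN - value if item in reverse_keyed else value

scores = [score(item, value) for item, value in responses.items()]
attitude_score = sum(scores) / len(scores)   # respondent's mean attitude score
print(f"Mean Likert attitude score: {attitude_score:.2f}")
```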

Another popular explicit measurement method is the semantic differential scale. Unlike the Likert scale, which asks respondents to agree or disagree with each question, the semantic differential scale uses pairs of words that are polar opposites of each other for the particular topic (Tullis & Albert, 2013). For example, in a survey about a particular political party on election day, a series of words is presented to study how strongly people associate those words with the party. In other words, the semantic differential is a tool for measuring the association between people's attitudes and a given object (Stoklasa, Talášek, & Stoklasová, 2019). The semantic differential scale has also been applied to the study of multiple aspects of mind perception, where emotions and intelligence correspond to experience and agency (Takahashi, Ban, & Asada, 2016).
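
A similar sketch, again with hypothetical adjective pairs and ratings rather than data from the cited studies, shows how ratings on bipolar adjective pairs could be combined into an overall evaluation of one concept.

```python
# Hypothetical semantic differential ratings of a single target concept on bipolar
# adjective pairs, each scored from -3 (negative pole) to +3 (positive pole).
ratings = {
    "dishonest-honest":     2,
    "weak-strong":         -1,
    "passive-active":       1,
    "unpleasant-pleasant":  3,
}

# Average the bipolar ratings into one evaluation score for the concept.
profile_mean = sum(ratings.values()) / len(ratings)
print(f"Overall evaluation of the concept: {profile_mean:+.2f} on a -3..+3 scale")
```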

However, both methods have pros and cons when measuring attitudes. There is no guarantee that respondents have some degree of understanding of the topic, and they may also tend to answer in line with socially acceptable biases (DePoy & Gitlin, 2016). In a report by Krabbe (2017), the items on a particular topic presented through Likert scaling may differ in importance, with one item being more important than another. On the other hand, there are reports that the semantic differential scale limits the range of responses (DePoy & Gitlin, 2016). Krabbe (2017) thus states that the use of direct measurement methods increases the susceptibility of the data to bias. Even so, the semantic differential scale is the better option within explicit measurement, as it reduces the tendency toward social biases (Stoklasa et al., 2019).

Implicit (Indirect) Measurement

Human thoughts and actions are usually shaped by external factors or by processes that take place automatically (Branscombe & Baron, 2017). Because of concerns about the disadvantages of explicit measurement, psychologists have developed unobtrusive, implicit measurement (Mitchell & Tetlock, 2015). Implicit measurement helps researchers understand the relation of attitudes to behaviour (Goodall, 2011). The hope is to bridge and shorten the gap between self-report methods and behaviour (Meissner et al., 2019), because implicit measurement has better predictive validity when assessing topics that are more sensitive for people (Sargent & Newman, 2020). Implicit measurement is mainly based on the respondent's speed and accuracy in responding to stimuli (Kurdi et al., 2019). Examples of implicit measurement methods are the Implicit Association Test (IAT), Evaluative Priming (EP), the Affect Misattribution Procedure (AMP), and physiological measures that include Event-Related Potentials (ERP) and Functional Magnetic Resonance Imaging (fMRI).

The Implicit Association Test focuses on people's silent thoughts, the thoughts they might not want to, or might be unable to, report to the researcher (Branscombe & Baron, 2017), and which are usually influenced by social group beliefs and stereotypes (Kurdi et al., 2019). The IAT builds on the fact that people always associate or evaluate social objects in a positive or negative manner (Branscombe & Baron, 2017). Schimmack (2019) classifies the IAT as a simultaneous classification task: respondents are required to classify pairs of different objects into mutually exclusive categories (Schimmack, 2019). With this, the researcher evaluates the respondent's automatic associations when classifying the objects (Chevance et al., 2017). The test works by measuring response times while the respondent evaluates or categorizes each object into its respective category (Schimmack, 2019). This evaluative conditioning helps the researcher derive the respondents' implicit beliefs even when the topic's specific trait is not present (Chevance et al., 2017).
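
The sketch below illustrates, with invented reaction times, the kind of latency-based scoring the IAT relies on: a simplified D-score-style contrast between compatible and incompatible blocks. Published IAT scoring algorithms (e.g., Greenwald and colleagues' improved algorithm) add trial-level corrections that are not shown here.

```python
import statistics

# Hypothetical reaction times in milliseconds from two IAT blocks.
compatible_rt   = [612, 587, 640, 598, 605]   # e.g., "flower + pleasant" pairing
incompatible_rt = [742, 719, 760, 705, 731]   # e.g., "flower + unpleasant" pairing

# Latency difference between blocks, scaled by the standard deviation of all trials,
# in the spirit of the IAT D-score.
pooled_sd = statistics.stdev(compatible_rt + incompatible_rt)
d_score = (statistics.mean(incompatible_rt) - statistics.mean(compatible_rt)) / pooled_sd

print(f"IAT effect (D-score style): {d_score:.2f}")  # larger = stronger automatic association
```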

Evaluative priming was introduced well before the Implicit Association Test, with the purpose of investigating the automatic activation of attitudes (Koppehele-Gossel et al., 2020). Although the mechanism of evaluative priming is similar to that of the IAT, evaluative priming resolves an instrumental concern often associated with the IAT (Koppehele-Gossel et al., 2020). For example, in a study by Lehnert et al. (2018) on language, respondents were given either a positive or a negative adjective after being presented with a prime stimulus. Based on the responses, the researchers were able to study spontaneous, implicit object evaluation grounded in prior representations in the respondents' memory (Lehnert et al., 2018).

Another example of implicit measurement is the Affect Misattribution Procedure (AMP). This method has been reported to be a very powerful technique for assessing preferences or attitudes that people wish to conceal (Hazlett & Berinsky, 2018). It uses a mechanism similar to other implicit measurement methods: respondents are presented with a prime stimulus before being shown an unfamiliar target (Ross et al., 2020). Results show that the affective and semantic judgement of the target is influenced by the prime stimulus (Ross et al., 2020). The method also has a high magnitude of priming effect as well as high statistical reliability (Hazlett & Berinsky, 2018).
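
As a rough illustration only, and assuming a scoring approach in which the priming effect is taken as the difference in the proportion of “pleasant” judgements following each prime type, a hypothetical AMP data set could be scored like this (prime labels and trials are invented).

```python
# Hypothetical AMP trials: each trial records the prime category and whether the
# respondent judged the unfamiliar target as "pleasant".
trials = [
    ("prime_A", True), ("prime_A", True), ("prime_A", False),
    ("prime_B", True), ("prime_B", False), ("prime_B", False),
]

def pleasant_rate(prime):
    """Proportion of trials with a given prime on which the target was judged pleasant."""
    judged = [pleasant for p, pleasant in trials if p == prime]
    return sum(judged) / len(judged)

# Priming effect: difference in "pleasant" judgement rates between the two prime types.
effect = pleasant_rate("prime_A") - pleasant_rate("prime_B")
print(f"AMP priming effect: {effect:+.2f}")
```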


The physiological implicit measures include Event-Related Potentials (ERP) and Functional Magnetic Resonance Imaging (fMRI). These methods of measuring attitudes are increasingly used as medical technology advances (Helfrich & Knight, 2019). ERP is a method in which voltage fluctuations are recorded at the scalp when a time-locked event is presented to the respondent (Thigpen & Keil, 2016). The event is usually presented before sensory, cognitive-control, affective, or memory-related operations are introduced to the participant (Kropotov, 2016). ERP therefore has the potential to record human perception and behaviour precisely through non-traditional approaches (Helfrich & Knight, 2019). fMRI, in turn, shows brain structures that respond differently to a given situation or stimulus, which provides more precise evidence when measuring someone's attitude or behaviour (Thigpen & Keil, 2016).

ISSUES IN ATTITUDE MEASUREMENT

Two issues are relevant in attitude measurement: the reliability and the validity of the measurement. Attitude measurement is designed to provide a valid, or accurate, measure of an individual's social attitude. There are a few reasons why addressing issues in attitude measurement is important. According to Alwin (1973), it matters because the attitude concept is present everywhere in the modern social sciences, because attitude measures are used extensively in both explicit and implicit form, and because attitudes are constructs known as latent variables, which are challenging to measure. First, the study of attitudes is present everywhere in modern social science, and various attitude measurement methods are used in social science research to achieve study objectives. In addition, social science often deals with constructs, known as latent variables, rather than directly measurable variables. A latent variable cannot be observed directly, so it is difficult to measure. The measures used to study attitudes should be consistent and accurate so that the results of a study are believable. It is therefore important to address issues in attitude measurement in order to make sure that the measurement of attitudes is reliable and valid.

Reliability of Attitude Measurement

Reliability refers to the degree to which a measure generates the same number or score each time it is performed (Hays & Revicki, 2005). Two types of reliability are involved in attitude measurement: internal consistency and test-retest reliability. A reliable measure should have internal consistency. Internal consistency reflects the extent to which items within an instrument measure various aspects of the same characteristic or construct; it is a way to gauge how well a test or survey is actually measuring what we want it to measure. For example, all items in an explicit measure should reflect the same underlying construct, so that participants' scores on the items correlate with one another. High internal consistency means the measure is capturing the construct well; low internal consistency means the items are measuring different constructs.
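
A minimal sketch of how internal consistency might be quantified with Cronbach's alpha, using hypothetical item responses (the data are invented for illustration only):

```python
import statistics

# Hypothetical item responses: rows = respondents, columns = items of one scale.
data = [
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
]

k = len(data[0])                                            # number of items
item_variances = [statistics.variance(col) for col in zip(*data)]
total_scores = [sum(row) for row in data]                   # each respondent's total score
total_variance = statistics.variance(total_scores)

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total scores)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```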

A reliable measure should also produce consistent scores across time when measuring a construct. This is called test-retest reliability. In other words, the same test is given to the same participants at different times to check the consistency of the scores. If the association between the scores over time is high, the measure has good test-retest reliability.
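
Analogously, test-retest reliability can be estimated as the Pearson correlation between scores obtained on two occasions; the sketch below uses hypothetical scores.

```python
import statistics

# Hypothetical scores from the same respondents at time 1 and time 2.
time1 = [12, 18, 9, 15, 20, 11]
time2 = [13, 17, 10, 14, 21, 12]

# Test-retest reliability as the Pearson correlation between the two occasions
# (statistics.correlation is available in Python 3.10+).
r = statistics.correlation(time1, time2)
print(f"Test-retest reliability (Pearson r): {r:.2f}")
```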

Reliability in Explicit Measures

Explicit measures show high reliability in terms of both internal consistency and test-retest reliability. For example, in a study by de Leeuw et al. (2019) titled Development of an International Survey Attitude Scale: Measurement Equivalence, Reliability, and Predictive Validity, the results showed satisfactory reliability, with McDonald's omega and Cronbach's alpha ranging from 0.76 to 0.83. In the study Reliability and Validity of the Sexual Experiences Survey–Short Forms Victimization and Perpetration (Johnson, Murphy, & Gidycz, 2017), the measures showed proper internal consistency and a high percentage of test-retest reliability. The purpose of that study was to provide psychometric data on the updated Sexual Experiences Survey–Short Form Perpetration (SES-SFP) and the Sexual Experiences Survey–Short Form Victimization (SES-SFV). The internal consistency of the items ranged from 0.92 to 0.98 in Cronbach's alpha. The participants redid the same survey after two weeks to check the consistency of the scores, and 70% to 90% obtained exactly the same score as on the initial survey. According to Alwin (2017), measurement results are most reliable when fewer response categories are used.

The reliability of a measure declines as the number of response categories increases (Revilla, Saris, & Krosnick, 2014). When there are many category options, especially options that include a middle category, ambiguity is introduced. The middle category in a 3- or 5-option scale leads respondents to choose the middle or neutral response to an item when they are uncertain, because it is easier and requires less effort. Fewer category options make it easier for respondents to make choices, whereas ambiguous options encourage respondents to choose at random; random choices introduce random error into the measurement and thus decrease the reliability of the measure. Alwin (2017) also states that unipolar response scales have higher reliabilities than bipolar rating scales. A unipolar scale asks the respondent to consider the presence or absence of a trait or attitude, whereas a bipolar scale refers to the degree or intensity of the attitude toward the item. Unipolar scales are easier for respondents because they are more direct, in contrast to bipolar scales, on which respondents have to choose based on the intensity, degree, or neutrality of their attitude.

There are other sources of random error that affect the reliability of explicit measurement. Non-existent attitudes can introduce random error in survey research (Converse, 1964): a respondent may have little to no knowledge of, or no opinion or attitude toward, a certain item in an instrument. However, when someone is chosen as a respondent for a survey, they are pressured to give an opinion even when they have none. This can lead the respondent to choose any option in the response categories and thus increase the random variation in the measures, so it is important for researchers to pick the right respondents for their study if it is to be reliable. According to Bem (1972), some of the random variation in attitude measures results from ambiguity in attitudinal cues. Some respondents hold firm thoughts or attitudes toward an item, while others may have conflicting or undecided responses. This response ambiguity increases the amount of random measurement error because it forces the respondent to make arbitrary choices, reducing the reliability of the measures. Lastly, ambiguity in the response scale alternatives also introduces random error into attitude measures: ambiguous response alternatives make it difficult for respondents to map their attitudinal cues onto the scale. For example, in research on how often respondents smoke cigarettes in a day, response options such as constantly, frequently, sometimes, rarely, and never make it challenging for respondents to choose, because the alternatives are ambiguous. It would be better for the researcher to ask how many cigarettes the respondent smokes in a day, with response alternatives given as numbers. This way, the meaning of each response alternative is clearer and it is easier for the respondent to map their attitudinal cues.

Reliability in Implicit Measures

Implicit measures show satisfactory internal consistency but weak to moderate test-retest correlations. Chevance et al. (2016) and Rebar et al. (2015), in their physical activity and sedentary behaviour research using the Implicit Association Test (IAT), report satisfactory internal consistency. These studies show that the internal consistency of both the IAT and the SC-IAT, assessed with split-half correlations and Cronbach's alphas, usually ranges from 0.70 to 0.90. These values are satisfactory by current standards and are better than those obtained with other indirect measures (Gawronski & De Houwer, 2012). In a review by Lane, Banaji, Nosek, and Greenwald (2007) of 20 studies in which IATs were administered to the same individuals twice across time, the test showed weak to moderate reliability, with Pearson rs ranging from 0.25 to 0.69 and a mean of 0.50. According to William and Steele (2016), the test-retest reliability in their study of racial attitudes in children was less satisfactory (r = 0.24) than the internal consistency (α = 0.70). The children may have felt tired during the second measurement, increasing the error variance and thus decreasing the test-retest reliability.
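
For illustration, a hypothetical sketch of the kind of split-half reliability check mentioned above, with the Spearman-Brown correction used to estimate full-test reliability from the half-test correlation (the scores are invented):

```python
import statistics

# Hypothetical IAT-style scores computed separately from odd and even trials
# for each respondent (a simple split-half reliability check).
odd_half  = [0.42, 0.10, 0.55, 0.31, 0.48, 0.22]
even_half = [0.38, 0.15, 0.60, 0.27, 0.51, 0.18]

# Correlation between the two halves (statistics.correlation requires Python 3.10+).
r_half = statistics.correlation(odd_half, even_half)

# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(f"Split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")
```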

A few challenges may affect the reliability of implicit measures. The IAT uses reaction time to measure the strength of association, and relying on reaction time puts the IAT at risk of increased random variation, which makes it difficult to assess the reliability of the measure (Rezaei, 2011). During an IAT, individuals can feel pressured to answer quickly because of the timing, which can lead to random responses and thus increase random variation. According to Blanton and Jaccard (2008), a split second can have a consequential effect on a person's score; therefore, when analysing IAT results, researchers need to avoid jumping to the conclusion that the test is unreliable. In addition, unfamiliarity with the IAT can also decrease the reliability of the test: if respondents are not familiar with how the IAT works, their reaction times, which are used to measure the strength of association, may increase. Rezaei (2011) therefore suggests that researchers include practice trials for respondents before the actual study, since making respondents familiar with the IAT can improve the reliability of the test.
