
La Salle University Aggression Among Students Discussion Questions

Question Description

6 questions:

Read the real world example below from the Daily Beast and answer the questions that follow.

Watching ‘Real Housewives’ Makes You Violent. At least, that’s the claim in the headline from the Daily Beast. I imagine that most of us feel squeamish after watching episodes of trashy television. But can it really make us violent? Fortunately, psychological researchers are able to test such a claim, and they recently did so. Here’s how one journalist explained the study:

127 college students watched Real Housewives and Jersey Shore to ascertain the effects of verbal aggression on-screen, Little People, Big World and The Little Couple to see supportive relationships, and Dexter and CSI to gauge the impact of physically hostile shows.

Half of the study’s participants then received an ego threat (a threat to a person’s self-image) while the other half did not … and aggressive behavior was then quantified using “the intensity and duration of noise administered to an ostensible opponent on a competitive reaction time task.”

This is a fairly clear description of the method. In describing the results, the journalist should have mentioned that, overall, people who received an ego threat were more aggressive than those who did not. What about the effects of the different TV shows? The journalist reports:

… those who had been dabbling in the dark arts of the Shore’s GTL crew had the most aggressive reactions, casting our viewing habits into an entirely new light.

…The study also showed that those who watched violent crime dramas weren’t exactly sweetness and light afterwards: such shows also provoked a more aggressive response from participants than the family-centered programs did, refreshing enduring fears about the impact of screen crime on real lives.

1. What kind of claim is it to say, “Watching ‘Real Housewives’ Makes You Violent”? (Hint: Does the study appear to be an experiment or a correlational study?) Please explain your answer.

2. What are the variables in this study? For each variable, indicate whether it is an IV or DV (or can’t you tell?).

3. How accurately does the headline depict the results of the research study? Write a “good title” that could be submitted for publication.

4. If you were a member of a research governing body at your institution, would you approve this research? Support your decision.

A writer for the Huffington Post has summarized a set of studies that compared two forms of note-taking: laptop-based notes and traditional paper-and-pencil notes. Is it better to take notes on a laptop, or by hand in a notebook? Here’s what he writes:

Ink on Paper: Some Notes on Note-taking

I went to college long before the era of laptops, so I learned to take notes the old-fashioned way: ink on paper. But that does not mean my note-taking system was simple. Indeed it was an intricate hieroglyphic language, in which asterisks and underscoring and check marks and exclamation points all had precise meaning, if only to me.

It’s a lost art. Many college students have some kind of electronic note-taking device nowadays, and most will swear by them. And really, only a Luddite would cling to pen and notebook in the 21st century. Typing is faster than longhand, producing more legible and more thorough notes for study later on.

But has anyone actually compared the two? Is it possible that laptops somehow impair learning — or conversely, that pen and paper convey some subtle advantage in the classroom? Two psychological scientists, Pam Mueller of Princeton and Daniel Oppenheimer of UCLA, wondered if laptops, despite their plusses, might lead to a shallower kind of cognitive processing, and to lower quality learning. They decided to test the old and the new in a head-to-head contest.

Of course, students could develop an elaborate hieroglyphic system using a laptop. Keyboards have asterisks and exclamation points and so forth. They could also go beyond mere verbatim transcription, summarizing and paraphrasing. These are the strategies that in theory lead to deep processing and firmly encode new material in memory. But do typists do this, or do they just type as fast as they can? That is one of the questions that Mueller and Oppenheimer wanted to explore in a real-world setting.

They ran a few experiments, all basically the same. In the first one, for example, college students were assigned to classrooms, some of which were equipped with laptops and others with traditional notebooks. They all listened to the same lectures, and they were specifically instructed to use their usual note-taking strategy. Then, about half an hour after the lecture, all of the students were tested on the material covered in the lecture. Importantly, they were tested both for factual recall (How many years ago did the Indus civilization exist?) and for conceptual learning (How do Japan and Sweden differ in their approaches to social equality?).

This experiment provided preliminary evidence that laptops might be harmful to academic performance. The students using laptops were in fact more likely to take copious notes, which can be beneficial to learning. But they were also more likely to take verbatim notes, and this “mindless transcription” appeared to cancel out the benefits. Both groups memorized about the same number of facts from the lectures, but the laptop users did much worse when tested on ideas.

At least right away. Remember that they were tested half an hour after the lecture, without opportunity for review. But what if these students did what students commonly do — leave the lecture, go back to the dorm, go about their lives, and at some point in time pull out their notes to study for an exam? Would having more thorough, transcribed notes prove an advantage in this more natural setting?

The scientists tried to simulate this in another experiment. As before, some of the students took notes with a laptop, others with pen and notebook, as they listened to talks on various topics. They knew in advance that the exam would take place in a week, and that they would have a chance to study beforehand. As before, the test covered simple facts as well as concepts, inferences and applications of the material.

The findings, which Mueller and Oppenheimer describe in a forthcoming issue of the journal Psychological Science, were a bit surprising. Those who took notes in longhand, and were able to study, did significantly better than any of the other students in the experiment — better even than the fleet typists who had basically transcribed the lectures. That is, they took fewer notes overall with less verbatim recording, but they nevertheless did better on both factual learning and higher-order conceptual learning. Taken together, these results suggest that longhand notes not only lead to higher quality learning in the first place; they are also a superior strategy for storing new learning for later study. Or, quite possibly, these two effects interact for greater academic performance overall.

The scientists had an additional, intriguing finding. At one point, they told some of the laptop users explicitly not to simply transcribe the lectures word-by-word. This intervention failed completely. The laptop users still made verbatim notes, which diminished their learning. Apparently there is something about typing that leads to mindless processing. And there is something about ink and paper that prompts students to go beyond merely hearing and recording new information — and instead to process and reframe information in their own words, with or without the aid of asterisks and checks and arrows.

  • What kind of experiment is this? Posttest-only? Pretest-posttest?
  • What are the IVs and DVs in the study? (Hint: there are at least two DVs.)
  • Assuming that they randomly assigned participants to the two conditions, can you support the causal claim that “taking notes on a laptop causes students to perform worse on conceptual questions about lecture material”?

Question 3 (3 pts)

Here is an interesting study covered by the Huffington Post. The headline was as follows:

Extroverted Children More Likely to Be Swayed By Environmental Cues: Study

The study…included 18 kids ages 6 to 10, whose levels of introversion and extroversion were rated on a scale by teachers and counselors.

On one day, the kids were served breakfast by adults; they were given a large [or a small] bowl, and then told the adults how much cereal and milk they wanted to have. On another day, the kids served themselves breakfast [after being given a large or small bowl]. The amount of food served — whether by the adults, or the kids themselves — was secretly weighed by scales hidden in the tables.

Extroverted kids served themselves 33.1 percent more breakfast when they had the larger bowl, compared with introverted kids, who only served themselves 5.6 percent more when they had a larger bowl.

When the adults were serving for them, both extroverted and introverted kids asked for more than 50 percent more when they had a bigger bowl.

  • What is the study design?
  • What are the IVs and the DV in this study?
  • What are the implications of this study?

Question 4 (3 pts)

Here are some quotes from the story, “Giving kids sips of beer turns them into teenage drunks,” posted on the food website Munchies:

Those innocent tastes of Chianti at the Thanksgiving dinner table could morph your child from a sweet, sober cherub into a bleary-eyed teenage booze-guzzling ne’er-do-well.

New research in the Journal of Studies on Alcohol and Drugs has found that children who sip alcohol as youngsters have an increased likelihood of becoming drinkers by the time they reach high school. In a long-term study by Brown University of 561 students in Rhode Island, researchers found that those who had tried even small sips were a whopping five times more likely to have tried a whole beer or cocktail by the time they reached ninth grade, and four times more likely to have gotten rip-roaring drunk.
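If it helps to unpack the arithmetic behind a “five times more likely” comparison, here is a minimal sketch with invented percentages (the article does not report the underlying rates, and the study may well have reported odds ratios rather than simple relative risks):

    # Hypothetical illustration only: these rates are invented, not taken from the study.
    sipper_rate = 0.25       # invented: 25% of early sippers had tried a full drink by 9th grade
    non_sipper_rate = 0.05   # invented: 5% of non-sippers had

    relative_risk = sipper_rate / non_sipper_rate
    print(f"Early sippers were {relative_risk:.1f}x as likely to have tried a full drink")  # prints 5.0x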

1. Given the study’s design, is a causal claim appropriate?

2. What were the two variables studied by the researchers? Explain whether you think each one was measured or manipulated.

3. Name some possible third variables.

Question 5 (3 pts)

This CNN story reports on research presented at an academic pediatrics conference. According to the conference presentation:

“A study found that the more time children between the ages of six months and two years spent using handheld screens such as smartphones, tablets and electronic games, the more likely they were to experience speech delays.”

According to this description, the two main conceptual variables in the study are “time spent using a handheld screen” and “speech delay.” Read on to find out how each variable was operationalized:

“In the study, which involved nearly 900 children, parents reported the amount of time their children spent using screens in minutes per day at age 18 months. Researchers then used an infant toddler checklist, a validated screening tool, to assess the children’s language development also at 18 months. They looked at a range of things, including whether the child uses sounds or words to get attention or help and puts words together, and how many words the child uses.”

1. According to the text, how did the study operationalize the variable, speech delay? How did they operationalize time spent using a handheld screen? Do you think this was a valid measure? Why or why not?

2. What would the scatterplot of the association they reported in the first quoted section look like? (A rough illustrative sketch follows this list.)

3. Does this association allow us to conclude that “exposure to handheld screens causes children to experience speech delays”? Why or why not?
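As a rough aid to picturing question 2, here is a minimal matplotlib sketch using entirely invented data; it only mimics the general shape of the reported positive association (more screen minutes going with more signs of speech delay), and every variable name and number here is hypothetical rather than taken from the study:

    # Hypothetical illustration only: all numbers are invented, not from the study.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    n = 200  # invented sample size for the sketch (the real study had nearly 900 children)

    # Invented parent-reported screen time, in minutes per day.
    screen_minutes = rng.gamma(shape=2.0, scale=15.0, size=n)

    # Invented "speech-delay risk" score that rises modestly with screen time, plus noise,
    # mimicking the general upward drift of the reported association.
    delay_risk = 0.4 * screen_minutes + rng.normal(0, 8, size=n)

    plt.scatter(screen_minutes, delay_risk, alpha=0.6)
    plt.xlabel("Parent-reported screen time (minutes/day)")
    plt.ylabel("Speech-delay risk score (invented units)")
    plt.title("Hypothetical shape of the reported association")
    plt.show()

In a real scatterplot of these data you would expect a cloud of points with a modest upward trend rather than a tight line; the strength of that trend is what a correlation coefficient would summarize.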


Question 6 (4 pts)

Fake news is in the (real) news lately. Whether you’re looking at Facebook, Buzzfeed, or your online newspaper, companies may try to clickbait you into reading a story that’s false. Companies may want you to read the story so that you’ll be exposed to their advertising. Or a political group may want to persuade you of an extreme opinion. In some recent cases, people have read fake news stories, believed them, and then acted according to what they thought was true.

How often do people mistake fake news for real news?

A team at Stanford University recently attempted to measure the problem in a large sample of high school students. The results of their study were summarized by the Wall Street Journal. The journalist from the WSJ reported the following:

…82% of middle-schoolers couldn’t distinguish between an ad labeled “sponsored content” and a real news story on a website, according to a Stanford University study of 7,804 students from middle school through college. The study is the biggest so far on how teens evaluate information they find online.

The study apparently showed students several examples and asked, for each one, whether it was a real story or fake news.

Here are some more results, reported by the WSJ:

More than two out of three middle-schoolers couldn’t see any valid reason to mistrust a post written by a bank executive arguing that young adults need more financial-planning help. And nearly four in 10 high-school students believed, based on the headline, that a photo of deformed daisies on a photo-sharing site provided strong evidence of toxic conditions near the Fukushima Daiichi nuclear plant in Japan, even though no source or location was given for the photo.

1. Is this an experiment?

2. What is (are) the variable(s) in the claims being made by this study?

3. In order to claim that “82% of middle schoolers” do something, you’d probably need to be sure that the study included a generalizable sample of middle schoolers. What are some ways the researchers could have obtained an externally valid sample?

4. For a frequency claim like this one, construct validity is also important. Reading back through the quotes above, you’ll see three different ways they measured the variable, “knowing when news is fake.” What are the three ways?

"Place your order now for a similar assignment and have exceptional work written by our team of experts, guaranteeing you "A" results."

Order Solution Now