All posts by Ellen Nierenberg

I am a research fellow in information literacy at UiT The Arctic University of Norway in Tromsø. I'm affiliated with both the University Library and the Department of Psychology. I'm taking a leave of absence from my permanent job as a university librarian at Inland Norway University of Applied Sciences (Hamar).

Events in the US are proof of the importance of information literacy!

The violent storming of the US Capitol in Washington DC yesterday, during Congress's certification of the electoral college votes confirming Joe Biden and Kamala Harris as President and Vice President of the United States, shows why being information literate is important. Disinformation spread in the US, particularly by the current president, incited a mob to a dangerous and disruptive insurrection, the likes of which has not been seen since the British attacked the Capitol in 1814.

One of the main tenets of information literacy (IL) is that we should be critical of our sources of information and use those that are reliable. But that’s difficult when the President of the United States (POTUS) – arguably the most powerful person in the world – spreads conspiracy theories and other disinformation about how the election “was stolen” from him. The internet offers multiple platforms where this disinformation spreads instantaneously, creating an echo chamber in which Trump supporters reinforce their beliefs.

POTUS’s followers get their information mainly from biased, conservative channels like Fox News and Breitbart News, from social media, from QAnon (a promoter of fringe conspiracy theories), and from POTUS himself. Trump’s megaphones, Twitter and Facebook, have now locked his accounts for 12 hours to prevent the spread of his lies and his encouragement of those rioting. POTUS is being censored.

Trump calls media “thieves and crooks,” sowing distrust in reliable newspapers like the New York Times and the Washington Post, and encouraging his supporters to rely on him for their information – an obvious characteristic of authoritarianism. This is dangerous for a democracy, where citizens vote for their government representatives based on the information they read and hear.

This sad chapter in American history can thereby be blamed on ignorance, caused by poor information literacy skills. Too many citizens have relied on biased sources of information. Perhaps, if people had consulted more reliable sources of information instead of believing blindly in a delusional president, the events of the past 12 hours could have been prevented.

First article accepted for publication!

The first article for my PhD has been accepted for publication in the Journal of Information Literacy! 🙂 I wrote the article, called “Knowing and doing: The development of information literacy measures to assess knowledge and practice,” together with Torstein and Tove. It’ll be published in the June 2021 issue.

You may remember my blog post from August 21 called “Peer review of my first (attempted) article,” in which I expressed how discouraging it was to receive a review that was several pages long and required major revisions to the article. I’d thought at the time that it was quite all right as it was. However, the article is much better now, after the revision. So although it was a lengthy process, it was well worth it.

The reviewers did a thorough job, and asked really good questions. We went through every comment, and either revised the article accordingly or argued for why we didn’t agree that the change was necessary. As we worked we wrote a detailed reply to the reviewers, so they could easily find the right spot, and see our reasoning.

The reviewers wrote that the framing of the article is now much clearer, and that the paper as a whole is “more consistent and focused, resulting in a much stronger article overall.” They believe that with this article, we’ve made a significant contribution to the conversation about how we think of the information literacy construct.

🙂

 

 

Midway assessment for my PhD

Today I had my midway assessment, a milestone for every PhD student. 🙂

This is how UiT describes it: “The midway assessment shall provide the student and supervisor with an independent assessment – evaluating whether the student has adequate progression to complete the PhD education according to schedule. The student shall receive specific feedback on his/her work so far, and get suggestions for the further work. The midway assessment provides the department with an opportunity to discern students that need structured follow-up. It is expected that such an assessment will improve the progress of the project, and increase the likelihood that the student completes the course of study within prescribed time.”

I sent in several documents ahead of time, and presented my research today to a committee (one professor from UiT and one from the University of Oslo), and to my 3 supervisors. Because of the pandemic, everything was on Zoom.

After the presentation we discussed my research, and I received lots of useful feedback that will help with the rest of my project. The professor from UiO is an expert in quantitative methods, in the field of education/special education. She had several good arguments for why I should include qualitative methods in my research:

  • Information literacy, by nature, is a field that is also qualitative, and shouldn’t only be explored quantitatively (although this is also a useful contribution).
  • If I want to publish articles in more general educational journals, with greater visibility and more impact than those in the information literacy niche, I should use other kinds of analyses, not just quantitative ones. I could use “mixed methods.”
  • If I only use quantitative methods, everything I write will be peer-reviewed by statisticians, and they can be very demanding, and perhaps concentrate more on the numbers than on the implications of the findings.
  • The analyses and statistics involved in doing a longitudinal study (which I’m in the process of doing) are extremely complex, and it can take years to master them.

She also encouraged me to compare students’ scores on the IL tests/measures with outcome measures such as grades and completion of college degrees. That would make my research more interesting and relevant.

This was good advice, and I really appreciate that she put so much time and effort into evaluating my work. 🙂 It was incredibly useful to get input from an external expert who was previously uninvolved in my research, and who could examine it through a new lens.

Of course it was hard for me to hear that I’m slightly off-track, but it’s better to hear it now than even later in the game, I guess… Although it will be challenging to change my direction at this point, it’s probably wise. (And after I’ve digested this newest input for a little longer than 3 hours, I’ll probably be even more convinced.) My study design has already gone through several revisions, so why not one more?

I’ve come to realize that doing a PhD means being constantly confronted with new intellectual challenges and continual revisions in plans. It often feels like my brain is doing somersaults, which somehow keep me on my feet. 😉

A big thank you to the two professors on my committee and to my three wonderful supervisors! 🙂 I feel privileged, humbled and grateful, once again.

Peer review of my first (attempted) article

 

Three months and two days after I submitted the first article for my PhD to a journal for publication, I finally got a response from the peer-reviewer(s). It wasn’t exactly as I’d hoped.

The editor first pointed out that the article (written together with 2 of my supervisors) was interesting, well-written, and relevant to the journal’s scope, but then wrote about aspects that needed to be addressed. These range from the framing of the article to the methods and statistics. The reviewers’ comments were attached.

The editor then wrote “If you are willing to revise the work along the suggested lines, we would be pleased to receive a resubmitted version for review.” I’m not sure what this actually means though. Would the new version have to go through an entirely new peer-review that could take another 3 months, and possibly be rejected again? Or would it be re-evaluated by the same reviewers, controlled only for the changes they suggested? If it’s the former, should I consider sending it to another journal instead?

The list of changes the reviewers suggested, by the way, was several pages long! (As opposed to my last 2 articles, written before I started my PhD, which needed only very minor changes.) They were mostly good points though, if I’m to be honest with myself. Some are causing us to look again at the basic assumptions of our article – can we really call this an intervention study? Why are we actually using Cronbach’s alpha to measure internal consistency, when information literacy is a multidimensional construct? Should we dwell on the point that IL is not unidimensional, and on our evidence of this, which was one of the 3 major research questions in the article? If not, were all those factor analyses a waste of time?! Argh!! That was all I did (tried to do) last summer!

I was quite discouraged, needless to say, especially after all the work that went into this article, and the long wait for the response. All that effort cannot have been in vain! But after talking to Torstein, I have hope that we can improve the article and resubmit it to the journal. He says that this is totally normal – standard procedure.

I felt certain that the article was publishable when I submitted it. We sure worked hard on it. But maybe we became blind to our own thought-patterns and written words?

Quantitative research takes a lot of time and effort, between creating and piloting the measurement instruments, gathering the data (in many stages, with many different samples, in our case), analyzing and visualizing the data, and then actually writing the article.

This makes me wonder whether the entire purpose of scientific endeavors should be to publish. Is that really the most important thing? After all, I have learned a lot doing this research, and I could disseminate findings at conferences or in my blog…

However, my PhD is article-based. I have to publish (or have ready for publication) at least 3 articles, in addition to writing the summary (kappa). In October I’ll be halfway through my 4-year period of funding for this project. I’m trying not to worry about not having published anything yet. This first article can be revised and (hopefully) published, and the second article is underway. And I have great supervisors (have I perhaps said this before?) who aren’t worried.

And now, onto the revisions…

 

“Fake news” in the corona-era

What exactly is fake news? How is it different from the better-defined terms misinformation and disinformation?

Misinformation is information that is not true, but is believed to be true by the person who disseminated it.

Disinformation is also false information, but it differs from misinformation in that the person who disseminates it knows that it isn’t true. It is a deliberate lie, often with malicious intent.

(Hint: you can remember the difference by thinking of the word “diss.” Disinformation often attempts to diss someone.)

Fake news has no formally accepted definition – in fact its meaning has changed significantly over the past 4 years. Previously, the term fake news was occasionally used for misinformation, but mainly for disinformation. A famous disinformation example is the “Pizzagate” incident, an attempt to influence the results of the 2016 presidential election in which candidate Hillary Clinton was accused of leading a child-abuse ring based in a Washington, DC pizzeria.

The term gained popularity during this election, but changed character when Trump began describing everything that he didn’t like in the media as “fake news.” One of the first examples of this was when he called reports of low attendance at his inauguration “fake news,” despite factual evidence of the meager turnout.

This makes the term “fake news” confusing and unhelpful, as it was previously mainly used for false information (both mis- and dis-), but is now frequently used for true information that someone doesn’t like. We should therefore avoid using the term “fake news” completely, and instead use “misinformation” and “disinformation.”

So where on the “information disorder spectrum” (as UNESCO calls the range of information pollution) are the many lies being spread about the corona-virus? Much of this is misinformation about the virus’ origin, prevention or treatments, spread by people – even presidents – who believe it to be true. This false information is often partly based on true information that has been twisted or reworked, as opposed to being purely fabricated. Some examples of later-debunked misinformation:

  • Vitamin D can prevent the corona-virus (spread on social media in Thailand)
  • Africans are not susceptible to corona-virus (spread on WhatsApp in Nigeria)
  • Drinking cow urine can cure COVID-19 (spread by a politician in India)
  • 5G towers cause corona-virus (spread in a French blog)
  • Your faith and God will protect you from contracting corona-virus (spread by several religious groups to their followers)
  • Injecting disinfectants can effectively treat the virus (spread by you know who, on live TV)

Some of the false information we hear about the corona-virus, however, is created and disseminated with malicious intent. Some examples of disinformation and various related conspiracy theories:

  • The US is the source of the virus, and they’re using it as “hybrid warfare” against China and Iran (spread on Iranian TV)
  • North Korea and China conspired together to create the corona-virus (spread on Fox News in the US)
  • The virus is a biological weapon created by the CIA to destroy China’s economy (spread on social media in Russia)
  • The corona-virus came from an accidental leak at a Chinese biological weapon lab in Wuhan (spread in some American news sources)

The spread of both mis- and disinformation can obviously have serious consequences, including injury, death or international conflict. WHO has therefore created a webpage to provide factual health-related information to bust many of the circulating myths about the virus.

Spreading false information is easier than ever – just click SHARE on your favorite social media. Research has shown that false information, because it can be so unbelievable and scary, spreads much faster and deeper than true information.

So what can you do to prevent the spread of false information?

  • Think critically!
  • Vote.
  • Check facts before you share posts on social media, even if you think that the information might truly be helpful to your friends. (There are several fact-checking websites out there, such as www.factcheck.org and www.snopes.com)
  • Be wary of anonymous sources.
  • Use trusted sources of information.
  • Also check the recommendations and advice provided by official government websites and international organizations such as the WHO.
  • Tell your friends who spread dubious information to delete it.
  • And if you’re a college student, look at your library’s webpages for useful information about evaluating sources, and attend courses offered by your wonderful librarians! 🙂

This is a pandemic. It’s affecting the entire world. If we want to defeat it, we have to be smart. So why am I posting this on my blog about information literacy? Because thinking critically is a huge part of being information literate!

 

 

First article is (nearly) done!

My data

This poor blog has been neglected for quite a while, since I’ve been concentrating all my efforts on analyzing data and writing the first article for my dissertation. Rather than writing a monograph, I’m doing a “compilation thesis,” which is a series of articles (at least three), together with a summary section (kappa).

The first article, with working title “Knowing and doing: The development and testing of information literacy measures,” has been an enormous effort, as it’s based on data from several different samples, collected at different times. I wrote it, for the most part, together with my advisor Torstein, who has provided excellent guidance throughout this process. Just the right combination of “here’s the answer” and “here’s how to do it yourself.” (Plus a good dose of neurons, logic, experience, and patience!)

If I’d written this article alone, it would’ve been done much sooner, but it would’ve been much worse. I’ve learned so much through this process, especially about how to structure an article based on empirical data, and the logic behind each section. It sounds so easy – Introduction, Methods, Results, Discussion – but it was actually quite difficult to separate these sections while preserving readability.

This article could’ve potentially been several, since each of its 3 main goals is nearly enough for an article in itself (especially the first):

  1. “to develop information literacy measures that are applicable across academic disciplines, and that are brief and easy to administer, but still likely to be reliable and to support valid interpretations”
  2. “to determine whether what students know about IL corresponds to what they actually do when finding, evaluating and using sources”
  3. “to help illuminate the question of whether IL should be conceived of as a coherent, unitary construct, or a set of disparate and more loosely related components”

Just look at 2 terms in the first goal: reliable and valid. I had no idea how important these concepts are when developing measurement instruments, how many analyses would have to be performed in order to “conclude” anything about reliability and validity, and how many words would be needed to describe these analyses.

We’ve had to economize with words, which, surprisingly, is quite difficult. The journal we’re aiming to publish it in has a limit of 8000 words, and we’re currently at ca. 7900.

The research is churning in my head whether I’m sleeping or skiing. I’m proud of myself for being disciplined, concentrated, and persevering throughout the process of collecting and analyzing the data, and then writing the article. Nothing has come easily – I’ve worked hard for everything I’ve accomplished. Luckily, I haven’t had too many other things going on for the past months (social isolation suits me just fine these days!), and could immerse myself in my work without losing track along the way.

The next blog post will be about the importance of information literacy in the age of Covid-19. 🙂

My data doesn’t make sense

I’ve spent months collecting and analyzing data from students regarding their information literacy knowledge and skills. For one study, I’ve used a survey to measure their knowledge and two written assignments to measure their skills. The idea is to see if there’s a correlation between these levels, in other words – is what they know reflected in what they do? (multiple regression analysis)

There are all kinds of analyses to perform even before asking that question, however, including (see the sketch after this list):

  • is the survey reliable? (using e.g. a split-half reliability test)
  • do survey questions (items) form logical groups (factors)? (factor analysis)
  • are the tests valid? (lots of analyses)
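
For the statistically curious, here’s a minimal sketch of what a couple of these checks can look like, written in Python rather than SPSS (which is what I actually use). The data, column names, and numbers below are invented purely for illustration:

```python
import numpy as np
import pandas as pd

# Toy data: one row per (hypothetical) student, six 0/1 knowledge items
# plus a made-up "skills" score from the written assignments.
rng = np.random.default_rng(42)
items = pd.DataFrame(rng.integers(0, 2, size=(100, 6)),
                     columns=[f"item{i}" for i in range(1, 7)])
skills = rng.normal(loc=3, scale=1, size=100)

# Split-half reliability: correlate the sum of the odd-numbered items
# with the sum of the even-numbered items, then apply the
# Spearman-Brown correction for test length.
odd = items[["item1", "item3", "item5"]].sum(axis=1)
even = items[["item2", "item4", "item6"]].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)
print(f"Split-half reliability: {split_half:.2f}")

# Knowledge vs. skills: does the total survey score predict the
# assignment score? (Here just a simple correlation and slope;
# the real analysis is a multiple regression.)
knowledge = items.sum(axis=1)
slope, intercept = np.polyfit(knowledge, skills, deg=1)
r = np.corrcoef(knowledge, skills)[0, 1]
print(f"Correlation r = {r:.2f}, slope = {slope:.2f}")
```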

So far, my results in this study are puzzling, to say the least. Correlations that I’d expected to see in my data do not exist. For example, there’s a negative correlation between the amount of higher education students have had and their levels of IL. Huh? The more education, the less they know??

As for reliability, whether my survey items produce accurate, reproducible, and consistent results, I get negative results sometimes! (See clip from SPSS below.) How is this possible, when – in my eyes – the survey questions (inside their 3 categories) are related to each other?
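
How can a reliability coefficient even be negative? One way it can happen (this is a general illustration, not my actual data) is when items that are supposed to measure the same thing end up negatively correlated, for instance if a reverse-worded question is never recoded, or if responses are essentially random. A small sketch in Python, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Three items that "hang together": alpha is comfortably positive.
base = rng.normal(size=200)
consistent = np.column_stack(
    [base + rng.normal(scale=0.5, size=200) for _ in range(3)])
print(cronbach_alpha(consistent))   # high, roughly 0.9

# Flip one item, as if a reverse-worded question was never recoded:
# the negative inter-item correlations can push alpha below zero.
flipped = consistent.copy()
flipped[:, 2] = -flipped[:, 2]
print(cronbach_alpha(flipped))      # negative!
```

Whatever the exact procedure, negative reliability is a signal that the items aren’t behaving as a coherent scale, for whatever reason.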

I’ve double-checked that my data is coded correctly, so that’s not the problem. It just doesn’t make any sense! It seems as though students have answered totally randomly on the survey. They may answer one question about the critical evaluation of information correctly, but not the next, even though the questions are quite similar.

If I could just find ONE meaningful correlation or significant result in this study, I’d be satisfied, but so far I’ve found none. I’m not finished collecting data, of course, so perhaps something meaningful will magically appear in future results. But so far, I’m just perplexed, and yep – frustrated. Argh!

I’ll have to start thinking “outside of the box” in order to interpret these results. Maybe the holiday break will help my brain to reboot? It’s all extremely challenging, but at least I’m learning to do research…

Nagging questions like “Will I be able to publish these seemingly meaningless results?” and “Can I get a PhD even if my data doesn’t make sense?” will hopefully take a place on the back-burner for the time being. There are certain things that I simply can’t do anything about, so it’s best to not focus on them. I’ll just plow on, doing the best that I can.

(And for the astronomically-interested: in two days is the winter solstice. On this day, at its highest, the sun here in Tromsø will be ca. 5 degrees BELOW the horizon. Not even the highest clouds are touched by its light. There’s one more month of polar night.)

How to safely save personal data

I’ve been collecting survey data from students for my research. In some surveys I ask for the students’ names or e-mail addresses. In order to protect the students’ privacy and ensure information security, this personal information cannot be saved together with the rest of the collected data.

I therefore created an ID-number for each student, and made a “scrambling key” connecting this ID-number to their personal information. I replaced the students’ personal information with these new ID-numbers in the survey data.

I ended up with two documents – a de-identified data file and a key with students’ personal data and ID-numbers. These two documents cannot be saved together, because if a hacker finds them, they’ll be able to connect the two and find out how a particular student answered survey questions.
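
For anyone facing something similar, here’s a minimal sketch (in Python, with invented file and column names) of the general idea: give each student a random ID, write the key and the de-identified responses to two separate files, and then store those files in separate places:

```python
import csv
import secrets

# Hypothetical input file with identifying columns ("name", "email");
# all file and column names here are just for illustration.
with open("survey_raw.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

key_rows, data_rows = [], []
for row in rows:
    student_id = secrets.token_hex(4)   # random, non-guessable ID
    key_rows.append({"id": student_id,
                     "name": row["name"],
                     "email": row["email"]})
    clean = {k: v for k, v in row.items() if k not in ("name", "email")}
    clean["id"] = student_id
    data_rows.append(clean)

# The de-identified data file (this one can live in cloud storage)...
with open("survey_deidentified.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(data_rows[0].keys()))
    writer.writeheader()
    writer.writerows(data_rows)

# ...and the scrambling key (this one must be kept somewhere completely separate).
with open("scrambling_key.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "email"])
    writer.writeheader()
    writer.writerows(key_rows)
```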

The instructions from the Norwegian Centre for Research Data (NSD) do not specifically state how to keep the documents separate, and neither do UiT’s webpages on information security. Is it sufficient to save one file in Teams/Sharepoint (cloud-based team collaboration software, where files can be stored and shared), and the other in OneDrive (the online cloud storage service used by UiT for sharing and editing files)? Both of these systems have the same username and password.

I posed this question to IT support at UiT and received an answer several days later, after they discussed the issue. Since the two cloud storage systems, Teams/Sharepoint and OneDrive, have the same login, they’re not considered separate entities. Someone with my password could compromise these storage locations and gain access to both documents.

IT support’s recommendation was to save the survey data in one of the cloud storage systems (accessible on my PC), and the scrambling key on a memory stick (and only on the memory stick). The memory stick should be locked in a cabinet, physically removed from the rest of the data.

So that’s exactly what I did, and what I recommend to others in similar situations. Better safe than sorry!

My one-year anniversary as a PhD-student!

Here are some honest reflections on my one-year anniversary as a PhD-student here at UiT:

  • I’d hoped and expected to have made more progress in my research by now. I’ve heard many other PhD-students say the same. My supervisors aren’t surprised by my “slow” progress – apparently this is normal. It’s scary that 25% of my time here is already over, and I haven’t even finished writing my first article yet.
  • Every step in the process of doing research involves complex decision-making, based on knowledge and experience.
  • The insight that research plans constantly change and evolve, depending on response rates and other factors that we’re not always in control of, makes me realize the value of crossing bridges as you come to them, and not having too many expectations.
  • I’m not as disciplined with my work as I thought I’d be, and as I have been in the past. I need to plan better and set aside more time for reading and writing. The practical parts of the research, and my compulsory duties this semester, take most of my time. This leaves me with the constant feeling of “I should be doing more.”
  • I’m learning so much about doing research, about information literacy, and about how a university library functions (as opposed to a smaller college library)! All of this will be useful for me in the future.
  • My motivation comes and goes, but that nagging feeling I had during my first months here – “Did I make the wrong decision?” – bothers me less with time.
  • Doing a PhD is a form of self-torture, with emotional ups and downs that sometimes make me feel as if I’m on a roller coaster. So why am I doing it? To learn! (And to convince myself that, even at my age, I still can!)
  • I love teaching. When I’m with students I feel useful, and that I’m doing something worthwhile. It’s more rewarding than (some aspects of) doing research. My research results will hopefully influence how we teach IL.
  • The fact that I’m earning less as a PhD-student than before bothers me more than I thought it would. Although earning a higher salary in the future was not my main motivation for doing this PhD, I really do hope that it pays off someday.
  • My bond with my supervisors is stronger than I’d anticipated. They give me advice when I need it, we have frequent meetings, and we really enjoy each others’ company. They are absolutely amongst the smartest and nicest people on this planet, and I so appreciate their support, wisdom, brilliance, and constant encouragement.