
The Skills Pod
Members of the University of Chester’s Academic Skills Team chat all things Academic Skills, sharing advice and anecdotes from their own experience in higher education. We have episodes on skills like referencing, critical thinking, maths and statistics, and time management.
Listening to The Skills Pod is a great way to learn hints and tips to help you during your academic journey while getting to know the Academic Skills Team.
The Skills Pod
Evaluating a Journal Article
Join the University of Chester's Academic Skills Team for The Skills Pod. In this episode we chat about evaluating a journal article. We go through breaking down a journal article and strategies/tools you might use when critically evaluating quantitative and qualitative sources. This episode features Anthony, Emma, and Mikayla.
Hi everyone, and welcome to this episode of the Skills Pod, where today we're discussing how to evaluate a journal article.
I'm Anthony, one of the academic skills advisors in the Academic Skills Team, and today I'm joined by...
Hi, I'm Emma, and I'm also one of the academic skills advisors.
I'm Mikayla, I'm one of the senior academic skills advisors for maths and statistics.
Thank you both for joining us today. So today, as we said, we're going to break down how to evaluate a journal article. Where do you think we should start?
I suppose, from my perspective, we obviously come from very different types of papers that we tend to feel a little more comfortable and familiar with. But I think we can probably all agree that the best place to start is the overall aim: what is the research question, whether it's something very quant heavy, qual heavy, or maybe mixed methods? What is it that the researchers aim to achieve, and how did they approach it? Even at that fundamental level: is it quant, is it qual, is it mixed? What are they trying to tell us about?
Yeah, for me, it's amazing how people don't actually start with the introduction. They tend to jump straight to the results, because that's often what you're trying to find. But the introduction covers the whole why: why are they doing this? What are they trying to solve? And particularly for students, the introduction can be really beneficial, because often there's a lot of literature in there showing where their study sits and the reasoning behind it. So that first big question for me when evaluating an article would be that simple question of why. Why is this being done, what is it trying to solve, and how is it trying to progress the field? If they're just doing it for the fun of it, you kind of question, okay, where is this sitting? What are you trying to solve? How does this relate to the other literature that's out there?
I think it's really important as well, like you're saying, to get a sense of what they're trying to achieve, because when you're reading through the paper, you need to make sure that they are actually answering those questions or showing those results. Because if they're not, that's also something to evaluate and, you know, tear apart a little bit, I guess, in a nice way.
Critical.
Yeah, critical.
And critical isn't just saying that an article is rubbish, or that I wouldn't have done that research that way. Like we've said before, being critical isn't saying I don't like something, I don't like how they've approached it, I wouldn't have done it that way. It's thinking about the approach that they've taken and what that means for the research. How do the decisions that they've made impact and influence the outcomes, the roads that they've gone down? Could they have done it differently? For me, that's one of those fundamental questions: how have they approached it, and are there any alternatives? We're not saying their approach is bad. It might be that their approach is gold standard, and that's okay. But it is thinking: how else could they have approached this?
Yeah, and if it's relevant to what you're trying to achieve, the question you're answering, or your assignment brief, it might be that you're thinking about the approach they've taken within the wider context of other literature that's done similar studies. A great way to show analysis is to compare it to other similar studies that have been carried out.
Yeah, for sure. And something that I always like to think about when I'm being critical about papers is the measures, or rather the tools, that they've used to measure things. Is the tool reliable? Is it valid? Has it been tested? Coming from a stats background, if I'm trying to measure a particular construct, I want to know whether the questions being asked as part of that questionnaire are actually helping to identify that particular element. If it's a new questionnaire, we're not saying that everybody has to use the same questionnaire or data collection method. But if it is something new and novel that maybe hasn't been used before, have they just used it and gone, yeah, sure, this'll be fine? Or have they thought quite carefully and tested it? Statistically, we can test whether tools measure what we think they're going to measure, or whether they measure slightly different things.
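The episode doesn't name a specific statistic here, but one common way to test whether a questionnaire's items hang together as a measure of a single construct is Cronbach's alpha. A minimal sketch in Python, with entirely made-up Likert-style responses for illustration:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of questionnaire items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents answering 4 Likert-scale items.
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 5],
])

# Values around 0.7 or above are often taken as acceptable consistency.
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```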
Yeah, I think it's really important. A lot of students, when they get that assignment to critically evaluate an article, focus on the methodology, and that is a real big part of that criticality. As you say: are the methods used appropriate? And that's the same in qualitative studies. So, what type of interview were they doing? Have they recognised the pros and cons of it? Have they talked about researcher bias, which is quite inherent in qualitative studies? Have they recognised that?
Think about the sample size and how things were done. What was their sampling technique? Did they use convenience sampling, which basically means it was just people that they knew, or was it more random or purposeful sampling, whatever it might be? And have they talked about the pros and cons of each? One easy way I often find to spot a study that's probably not as good as it could be is that the authors very rarely discuss the limitations of their methods, particularly in qual. It's like, oh, we did this, this and this, wasn't it great? All good researchers would recognise: look, we've done this, and there are some potential issues with this method, just to let you know, because a methodology is there so that we can replicate that study if we had that population. So one red flag I look out for is when they don't list any limitations, because every piece of research has limitations attached to it. That starts to raise a little bit of a red flag for me, which I then carry forward to the results, the discussion, and eventually the conclusion.
Yeah, it's taking things with that pinch of salt. Like I say, nobody is ever perfect. We all like to think, you know what, I've done a really great piece of research, but I look back at some of the research I've done before and think: was it perfect? No. But I was able to use it as a springboard for future research. I was able to draw on those limitations and use them for future work. So the first paper that came out said, yep, here are some things I probably should have considered that weren't quite perfect, and let's have a look at how we start to turn that around and progress. So, as you say, Tony, it's really interesting to look at the limitations, the things that weren't quite right.
The other thing you touched upon as you were talking about methodology is sample size. It's something that a lot of students have so many questions about: is the sample size appropriate? And it's a "how long is a piece of string" question sometimes, isn't it? What is an appropriate sample size? With quantitative research, there are tools available that mean we can consider whether a specific sample size is appropriate, and that's linked to a range of different things. Our team are great at talking to students about what tools are available and what information we need to make that informed decision. There are so many different tools that you could use. G*Power is one of the main ones we would probably recommend, because it covers such an array of potential analyses.
Obviously, stats is such a big topic, and there are so many different tests, that nothing covers everything perfectly. But with those tools, you can put in some of the information from your paper. So you think about things like your cutoffs or thresholds in terms of alpha level, what we call power, and the main one, what we call effect size. When we're thinking about statistics, smaller sample sizes are usually sufficient when we've got a really big difference. So if we are comparing different groups and we expect to see a really big difference between them, then the sample size can be much smaller, probably more in line with what qualitative tends to be. Whereas if we're expecting a really subtle difference between the groups we're comparing, we typically need a much bigger sample size. So this is when, like I say, we have to really start to dig into those papers, particularly from a quant perspective, to consider the impact of what they found, as well as what they thought they were going to find, because that does impact and influence how many people they will need.
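G*Power itself is a standalone desktop application, but as an illustration of the same idea, here is a hedged Python sketch using statsmodels' power analysis for an independent-samples t-test. The thresholds (alpha = 0.05, power = 0.80) are conventional defaults, and the effect sizes are purely illustrative, not drawn from any particular paper:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A large expected effect (Cohen's d = 0.8) needs far fewer participants
# per group than a subtle one (d = 0.2), echoing the point above.
for d in (0.8, 0.5, 0.2):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"Cohen's d = {d}: about {n:.0f} participants per group")
```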
Yeah, and for those of you doing qual, you might be thinking, oh, do I need to use G*Power and effect size? No. Qualitative is different. Quantitative research is generally trying to generalise to a population; you can't really do that with qual, it's more about the nuances. What's important to know about sample size and sampling technique in qualitative is how it relates to the results. So, for example, if a qualitative study was an in-depth investigation of five nurses, and yet in the results and conclusion they say "all nurses will...", that doesn't match up. If they say "in this sample, all of the nurses did X, Y and Z", then that's fine. So when you're looking at qual, it's more about how they interpret the results in the discussion in relation to the sample.
What I can also say on methodologies is that a big question I often have is: okay, you've approached it in this way. Is that similar to other studies? Or is everybody approaching this quantitatively, while you're coming at it from a qual angle? What are the pros and cons of that? If everyone is doing the same thing, it potentially helps you compare results a little more easily. But if someone's coming at it from a different angle, you have to ask why. Is it because it's offering that nuance, that difference? It's definitely something to consider when it comes to methodologies.
Yeah, like I say, sometimes it's really great to be doing the same thing, to replicate what other people have done before but broaden it out, choose a slightly different topic. Or actually, do we need to think about things slightly differently? Because that might give us a slightly different viewpoint, and that's okay. I suppose the other question I've got for you, Tony, is this. Within quantitative papers, there are fairly strict criteria in terms of the analysis that can be done. For example, if we're comparing two groups, typically that would indicate we're going to be using a specific kind of t-test. In terms of approaches and analysis for qualitative, how does that work?
So it entirely depends on what kind of analysis you're doing. For example, thematic analysis is a tool that's not really governed by any theory per se, and there's no one right way of doing it. Those of you doing qualitative research will probably at least have heard of thematic analysis, and there are authors, Braun and Clarke, who some would say perfected and progressed thematic analysis, and they offer a six-phase process. But they recognise it's not bound by theory; it's just one way of doing it. Whereas if you're doing something like IPA, discourse analysis, or even content analysis, there's more structure and there are set procedures. So it really does depend on what you're using and why, and this all comes back to you as a qualitative researcher: where you sit and what you're trying to get out of it. For example, with discourse analysis you're looking at what people have said in relation to theory, to race, to culture, all of that, and there's a set procedure for how you do it, but thematic is more fluid. That's often one of the challenges with qual: there are so many different ways to do it, whereas quant is a lot more restrictive in what you do, how you do it, and why. So yeah, it comes back to that "how long is a piece of string" question; it entirely depends what analysis you're going to do.
Yeah, and that is why, like I say, it is sometimes quite a challenge, whether you're thinking about your own research or someone else's: how else could they have done it? Sometimes you'll start to map that out and you'll end up with a million different pages of "oh, they could have done this" or "they could have looked at that", and then it's thinking about how to bring it all together. I will absolutely hold my hands up: I am an absolute scatterbrain, and when I'm planning an evaluation of a journal article, I'm one of those people who ends up with ten spider diagrams and mind maps full of different thoughts. It takes time and practice to bring all of that into a really cohesive and logical argument. Like I say, I'm like, oh, they could have done this, or they might have done that, or this is a really good thing. For me, it's allowing yourself that freedom to begin with, but also making sure you've got time at the end to bring it all together. When you're starting to be critical about articles, it's very easy to just write a list of things that you think, and it takes a lot more finessing and practice to bring it all together so that it feels logical and doesn't read like, as I say, a shopping list.
Cool. So we've talked about the introduction, and we've focused a bit on methodology. Now that we've weighed up the pros and cons of the methods, we're moving on to results and discussion. What should we look for?
This is my favourite area to dig into, just because that's where a lot of the decisions are made when you're writing up your own research, a lot of decisions about how am I going to present this. The introduction and methodology are the background and the context; the results and discussion give you a little bit more freedom, which is why I love it. But it also means there are potential pitfalls for people when they're writing up those elements.
For me, a key thing to consider is the logic of the results. From a stats perspective, from a quant perspective, typically we start with demographic information, unless that's been included in the methodology, as well as some descriptive statistics, before we get much more specific with the statistical tests. So I always think of it like a pyramid, an inverted one in this case: start off broad and then get much more specific to help answer the specific hypotheses being tested. We mentioned research questions right at the very start, but when we're thinking about statistical analysis we also have a secondary layer, what's known as a hypothesis: specific statements that we're testing within the data. This, again, is a slight difference between quant and qual: every statistical test you run has a specific statement of intent, a hypothesis, that will be tested. And we need to make sure that's been covered and that all of those tests have been reported. We can't just pick and choose the ones that we think are fun or have interesting outcomes.
It's always interesting to think about whether they've presented everything they wanted to look at, or whether they've only picked the analyses that are statistically significant, where we're saying there is a difference or a relationship. Have they ignored, or simply not reported, the ones that are not statistically significant, where we're saying there's no difference? For me, those are actually the more interesting ones: okay, there's no difference there, and then I start to think about why. But sometimes we find that some papers will only present the statistical analyses that are statistically significant. There's been so much research on this. I think there was a famous case, oh God, years ago, around whether antidepressants worked. There was lots of research on the effects of antidepressants versus placebo groups, and what actually happened was that a lot of the negative results, the results that said there was no difference or that it was down to the placebo effect, were maybe not reported. It's what we call publication bias, and it gives an almost inflated perception of whether or not things are effective.
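The speakers don't prescribe a tool here, but to make the "report every test" point concrete, here's a minimal Python sketch using scipy with made-up data: two group comparisons, where one comes out significant and one doesn't, and both get reported either way:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical scores on two outcome measures across two groups.
# Outcome A has a real group difference built in; outcome B doesn't.
group1_a = rng.normal(loc=55, scale=10, size=40)
group2_a = rng.normal(loc=48, scale=10, size=40)
group1_b = rng.normal(loc=50, scale=10, size=40)
group2_b = rng.normal(loc=50, scale=10, size=40)

for name, g1, g2 in [("Outcome A", group1_a, group2_a),
                     ("Outcome B", group1_b, group2_b)]:
    t, p = stats.ttest_ind(g1, g2)
    verdict = "significant" if p < 0.05 else "not significant"
    # Report every test, whatever the verdict, to avoid selective reporting.
    print(f"{name}: t = {t:.2f}, p = {p:.3f} ({verdict})")
```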
And what would we look for in a qual paper?
In a discussion, we would be looking at whether they're drawing things together. Within discussions, we'd always say to students writing their own: it's about situating your findings, your results, within that wider conversation going on around the topic. So how does it connect with other studies that have been carried out? Do the results agree with what's out there, or are they different from what's out there? And you're also thinking about why that might be the case.
Yeah, and qual is quite different in that a qual paper typically doesn't have a separate results section and discussion. They're often combined, because your results, being the quotes, need to be talked about in situ. And it's interesting: like you say, in a quant paper you kind of have to report everything, all of your hypotheses, all of the tests you've done. Whereas in qual, you might generate 20 themes from your data if you're doing thematic analysis, but you might only talk about four, and those four might not even be the most common. It's the fact that those four help drive home the point of your research question. So often you question: okay, you've talked about this and you've linked it to the literature, but what did you potentially not talk about? What other themes were at play that you didn't discuss? That's partly the beauty of qual, you can go into such depth, but yeah, you do have to question it. And as Emma said, absolutely spot on: when you read that qual paper, are they just presenting their quotes, or are they saying this links to these studies, which adds validity? And are they also giving the why? Are they offering, with research, reasons why their participants said what they said? Because for yourself, Mikayla, when you have a separate results section and discussion, you do also have a why, don't you? Your discussion is essentially the why behind your results, but they're slightly disconnected compared to qual.
Yeah, and I would say this because of my background, but from my perspective that just feels more logical to me: I've looked at this element, this is what I think about it, and this is contextualising it in the real world. What does it actually mean? It depends on your subject as well, in terms of what that could look like. If you're in one of the medical or biomedical fields, it might have clinical implications, and that might be what you would focus on and consider. Or if you're in economics, it might be something that you could take forward and move into an economic model. So it can be quite subject specific in terms of how that looks and some of the future recommendations and discussions that are made based on it.
So now moving on to the conclusion, which I often find is one of the most important parts of any research piece. I often say to students, read the introduction and then go straight to the conclusion. It's that tying it all together, and how papers do that. So what should we be looking for?
I guess it's: have they answered their research question? Have they met their aims? Because, as you said, the conclusion is so important. Everything that's come before is leading you through to that conclusion, and by the time you as a reader get to it, you should be convinced by what's been presented to you.
Yeah. And for me, the big thing to look out for in the conclusion is: are they overgeneralising? I see it quite often, in both stats papers and qual, where they take some of the evidence presented, then run with it and make it overzealous. I used that nurses example earlier because I'd seen a paper recently that said, in effect, all nurses get fatigued when blah, blah, blah. And actually the sample size wasn't enough to make the recommendation that that is the case. That's something I often look out for.
The other thing I often look out for in a conclusion is: have the authors summarised the key information in relation to exactly what Emma said? Have they actually answered the question they set out to answer? But are they also offering a springboard for further research, saying, okay, we found this, and we think the field should now look at X? Or, further research needs to be done on this because we couldn't do it, but we feel from our discussion that it had an impact on those nurses, for example, so we need to look at X now and move that forward. For me, a really good conclusion summarises, but also offers: we recognise a limitation of this study, or something we haven't covered, and it'd be nice if the field started to look at this. That's how research progresses. So if I see a conclusion that's just, oh look, our results are great, we are right, this is what we've done, yeah, that's a red flag for me.
So you kind of don't want it neatly boxed off. You want: this is what could come next.
Absolutely. Absolutely.
Yeah. So if you do see that somebody has drawn a line and said, yep, this is great, nothing more to talk about: what could you potentially say about that, if we're starting to be quite critical? How do we approach pointing out that they haven't looked towards the future?
Well, for me, my big question is: why are you not doing that? You're not being open to question, or willing to recognise a limitation at play. And I don't think any researcher worth their salt can categorically say, hand on heart, we have covered this entire topic in so much detail that nothing else ever needs to be done. So whenever I do see that, I'm like: what are you trying to hide? Because you're not fooling anybody.
So we went quite in depth there, which is obviously what we want when evaluating a journal article. But when you're initially trying to gather your thoughts about an article, you might stay quite high level and think about the CRAAP test: Currency, Relevance, Authority, Accuracy, Purpose. You ask simple questions of the article that can lead on to those deeper evaluations. Firstly, you might think about the currency of the article. Is it dated? Is it a recent article, or is it an older piece of research that might be a core study other researchers have springboarded from? So question the timeliness of it. Then the relevance: is it relevant to the question that you're trying to answer or the topic that you're exploring? Then we've got authority: who carried out the study, who's written the article, and what qualifies them to have been able to carry out this research? Then, am I up to accuracy now? So: is the research accurate and reliable? These are the questions we've been talking about, all the bits you might start to pull apart a little. And then the purpose: it's really important to think about why the study was carried out or why the article was written. What was the purpose of it? And I guess that's where you might think about bias as well: who funded the research, who published it? Mikayla, and I always use this example now, you told me about an article on the health benefits of smoking, where the small print said the research had been funded by a cigarette company. I love the bias in that one.
There is, yeah, absolutely.
So the CRAAP test is something you might use when initially trying to get to grips with whether an article is going to be worth spending more time with and evaluating.
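Purely as an illustration (this isn't a University of Chester tool, and the structure is hypothetical), the CRAAP questions above could be kept as a small reusable checklist in Python:

```python
# The five CRAAP criteria paired with the screening questions discussed above.
CRAAP_TEST = {
    "Currency": "Is the article recent, or an older core study others built on?",
    "Relevance": "Is it relevant to your question or topic?",
    "Authority": "Who wrote it, and what qualifies them to do this research?",
    "Accuracy": "Is the research accurate and reliable?",
    "Purpose": "Why was it written? Who funded or published it (possible bias)?",
}

def screen_article(title: str) -> None:
    """Print a quick first-pass checklist for a given article."""
    print(f"Screening: {title}")
    for criterion, question in CRAAP_TEST.items():
        print(f"  [{criterion}] {question}")

# Hypothetical article title, for illustration only.
screen_article("Example article on shift fatigue in nursing")
```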
Yeah, that CRAAP tool is really, really important, I would say, and sometimes it's a requirement: there are some assignments where students are asked to use it to evaluate an article. But it's a great tool, and even if it's not a requirement, use it. And I guess the other thing is: question everything. That's what we're always encouraging students to do. Don't take anything at face value. When we're talking about looking at journal articles and methodology procedures, as a student listening you might be thinking, how am I ever going to get to that point? And you will. The more you engage with this, the more you start looking at different journals and asking those questions, you'll start to see the emerging patterns of what a good paper looks like, and it becomes a natural process. So now I don't consciously think about this; you just kind of know, because I've lived in this field for quite a while and I read a lot of journal articles. As a student, you might feel like this is a lot of cognitive work to go through, and it is, but it will get quicker the more you engage with it. And actually, it will bleed into everything else you do in everyday life. If you're critical of journal articles, you're going to be critical of everything else that's out there, and you'll ask those questions. That's really what we want our graduates to be: critical.
And that's why, for those at level 6 and definitely at level 7, you'll see your tutors, you'll see our team, constantly pushing elements of criticality. Within that, it does take practice, it does take time, as we've said. I know there are different templates to help you get going; our team have various approaches to breaking an article down, whether that's into some of the sections we've spoken about across this podcast, thinking about introduction, methodology, results, and using a template to start putting some of your ideas and thoughts down. It just helps the process and makes things a little bit easier.
Yeah, I guess it's finding your own strategies for that. As you said, it could be a template. I know friends who've got Excel spreadsheets where they have all the different components of a journal article, write their critical thoughts against each one, and make links between them. So it's finding what works for you, but the crux of it is questioning everything. And as you said, I think that CRAAP tool is such a great place to start and to hang information off.
So thank you both for joining me today on this episode. Bye. Bye. Bye.
Hi there. If you're a University of Chester student, these are the ways that you can access support from your Academic Skills Team. You can access our Moodle pages via the green Training and Skills tile on Portal. On there, you'll find a wealth of information covering a variety of different skills, from referencing to planning to writing. You can send an extract of your work to our feed-forward email assistance service by emailing ask@chester.ac.uk. You can send us 750 words or three paragraphs per assignment, and an academic skills advisor will get back to you within three working days with generic, developmental feedback on aspects such as criticality, paragraph structure and referencing.
You can also use our one-to-one service. You book on our system and meet with an advisor for around 30 minutes, online or in person depending on your preference, where the advisor will discuss any skills-related issues you have and talk through the comments they've made on your work to help you progress in your academic studies. If you and a group of your course mates or friends are struggling with the same academic skill, you can book an Ask Together session by emailing ask@chester.ac.uk with details of the skill you want to cover, how many people are in your group, and your availability, and we can look to arrange a bespoke session with an academic skills advisor.
And of course, you've got The Skills Pod. If there are any topics you'd like us to cover, any suggestions, or if you'd like to get involved with The Skills Pod, drop us an email at ask@chester.ac.uk.
Ask: supporting your success.