OER interoperability educational design: enabling research-informed improvement of public repositories

Marta Romero-Ariza1, Ana M. Abril Gallego1, Antonio Quesada Armenteros1 and Pilar Gema Rodríguez Ortega2*

1Department of Didactics of Science, University of Jaen, Jaen, Spain
2Department of Specific Didactics, University of Cordoba, Cordoba, Spain
Introduction: According to UNESCO, open educational resources (OERs) could be tools for meeting Objective for Sustainable Development 4, as long as they have the appropriate characteristics and sufficient quality to promote citizen education.

Methods: This work presents a quality analysis of OERs in a public repository using mixed-methods techniques and a participatory approach.

Results & Discussion: Though the quantitative results show high mean values in all the dimensions, the qualitative analysis provides a better understanding of how key stakeholders perceive particular aspects and how we can take a step forward to enhance usability and improve OER psycho-pedagogical and didactic design. The triangulation of information from different sources strengthens consistency and reliability and provides a richer perspective to inform future work.

KEYWORDS
open educational resources, interoperability, educational design, quality analysis, research-based improvement
1. Introduction
Open educational resources (OERs) have been envisioned by the United Nations Educational, Scientific and Cultural Organization (UNESCO, 2019) as promising tools for meeting the Objectives for Sustainable Development (OSD) collected in the 2030 Agenda, specifically those dealing with the promotion of high-quality, equitable and inclusive education for all (i.e., OSD4). In this sense, UNESCO has already claimed that, due to their free nature and capability to transform education, OERs may play a key role in ensuring inclusive, equitable and quality education and promoting lifelong learning opportunities for all; therefore, they must be exploited. However, their effective pedagogical application (and the derived learning outcomes) is linked to their quality in terms of content, design, adaptability, usage, etc. Beyond that, in an attempt to stay competitive in global education, universities are focusing on digital transformation strategies that imply (among other things) the design and implementation of OERs (Mohamed Hashim et al., 2022). Therefore, UNESCO has also stated the need to develop specific research strategies focused on the quality evaluation of existing OER repositories. Thus, this study responds to experts' claims about the need to systematically evaluate the quality of OERs (UNESCO, 2019) and, previously, the necessity to address the validity and reliability issues of
the evaluation instruments previously adapted and applied (Yuan and Recker, 2015).

Within a wider study about OERs and their impact on teaching and learning, this paper presents the results of the quality analysis of the OER repository offered by the National Centre for Curriculum Development through Non-Proprietary Systems (CEDEC), dependent on the Spanish Ministry of Education. The educational resources analyzed have been developed within the national project named EDIA (from the Spanish "Educativo, Digital, Innovador y Abierto": educational, digital, innovative and open), which fosters the creation of innovative open digital resources that enable the intended educational transformation.

In particular, the present work intends to respond to the following research questions (RQ):

RQ1: What is the quality of EDIA OERs in terms of interoperability, psycho-pedagogical and didactic design and opportunities for enhanced learning and formative assessment?

RQ2: Which aspects might be improved?

To respond to these questions, we will use a participatory approach involving teachers and OER experts and draw on quantitative and qualitative data gathered using different instruments, unveiling a rich picture that goes beyond the evaluation of technical features related to OER interoperability and usability and offers an interesting landscape to discuss key psycho-pedagogical and didactic aspects, with some implications for the improvement of OER design.
OERs might be dened as learning, teaching and research
materials in any format and medium that reside in the public domain
or are under copyright that have been released under an open license,
permit no-cost access, re-use, re-purpose, adaptation, and
redistribution by others (UNESCO, 2019). According to this
denition, OERs provide teachers with legal permissions to revise and
change educational materials to adapt to their needs and engage them
in continuous quality-improvement processes. erefore, OERs
empower teachers to “take ownership and control over their courses
and textbooks in a manner not previously possible” (Wiley and Green,
2012, p. 83). e unique nature of OERs allows educators and
designers to improve curriculum in a way that might not bepossible
with a commercial, traditionally copyrighted learning resource
(Bodily etal., 2017). Moreover, and although many institutions are
promoting the adoption and creation of OER, they are still lacking in
the policies and development guidelines related to their creation
(Mncube and Mthethwa, 2022). However, to make the most of the
opportunities oered by OERs, it is necessary to ensure a quality
standard for the resources by implementing proper quality assurance
mechanisms. In response to these concerns, UNESCO recommends
encouraging and supporting research on OERs through relevant
research initiatives that allow for the development of evidence-based
standards and criteria for quality assurance and the evaluation of
educational resources and programs. In response to this claim, some
research studies have even proposed specic frameworks aimed at
assessing the quality of OER (Almendro and Silveira, 2018).
Literature reviews (Zancanaro et al., 2015) and recent research on OER-based teaching (Baas et al., 2022) show that the main topics covered in the OER literature are related to technological issues, business models, sustainability and policy issues, pedagogical models, and quality issues, as well as barriers, difficulties, or challenges for teachers' adoption and use of OERs (Baas et al., 2019). On a wider scope, some authors discuss the current impact of technology on key cognitive aspects such as attention, memory, flexibility and autonomy and call for an effective and pedagogically sound use of technological resources (Pattier and Reyero, 2022).
The analysis of students' online activities, along with their learning outcomes, might be used to understand how to optimize online learning environments (Bodily et al., 2017). Learning analytics offers great opportunities for the continuous improvement of OERs embedded in online courses. However, Bodily et al. (2017) state that, despite this claim, it is very hard to identify any publications showing results from this process, and there is a need for a framework to assist this continuous improvement process. In response to this need, they propose a framework named RISE for evaluating OERs using learning analytics to identify features that should be revised or further improved. Pardo et al. (2015) argued that learning analytics data do not provide enough context to inform learning design by themselves. In agreement with this idea, Bodily et al. (2017) clarify that the RISE (Resource Inspection, Selection, and Enhancement) framework does not provide specific design recommendations to enhance learning but offers a means of identifying resources that should be evaluated and improved. The RISE framework is intended to provide an automated process for identifying learning resources that should be evaluated and either eliminated or improved. It relies on a scatterplot with resource usage on the x-axis and the grade on the assessments associated with that resource on the y-axis. This scatterplot has four different quadrants used to find resources that are candidates for improvement; resources that reside deep within their respective quadrant should be further analyzed for continuous course improvement. The authors conclude that although this framework is very useful for identifying and selecting resources that are strong candidates to be revised, it is not applicable to the last phase of the framework, which is intended to support decisions about how to improve them. This last phase requires in-depth studies that combine quantitative and qualitative data and involve experts in learning design. In this line, it is worth mentioning the work of Cechinel et al. (2011), which stated the complexity of assuring a standard of quality inside a given repository, given the fact that it involves different aspects and dimensions, such as quality regarding not only the content but also its effectiveness as a teaching tool and its usability, among others.
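As an illustration of the quadrant logic described above, the following minimal sketch (in Python) classifies resources by usage and assessment grade. It is not the authors' implementation of RISE: the resource data, the mean-based thresholds and the quadrant labels are illustrative assumptions only.

```python
import statistics

def rise_quadrant(usage, grade, usage_threshold, grade_threshold):
    """Place a resource in one of four quadrants of a usage-grade scatterplot.

    x-axis: resource usage; y-axis: grade on the assessments linked to
    that resource (in the spirit of Bodily et al., 2017).
    """
    high_use = usage >= usage_threshold
    high_grade = grade >= grade_threshold
    if high_use and high_grade:
        return "high use / high grade"
    if high_use and not high_grade:
        return "high use / low grade (candidate for revision)"
    if not high_use and high_grade:
        return "low use / high grade"
    return "low use / low grade (candidate for revision or removal)"

# Hypothetical resources: (id, usage count, mean assessment grade 0-1)
resources = [("oer-01", 420, 0.86), ("oer-02", 380, 0.55),
             ("oer-03", 35, 0.88), ("oer-04", 20, 0.48)]

# Simple mean split as an assumed threshold; other cut-offs are possible.
usage_cut = statistics.mean(r[1] for r in resources)
grade_cut = statistics.mean(r[2] for r in resources)

for rid, usage, grade in resources:
    print(rid, "->", rise_quadrant(usage, grade, usage_cut, grade_cut))
```

As the text notes, such a classification only flags candidates; deciding how to improve a flagged resource still requires the kind of mixed-methods, expert-based study carried out here.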
Recently, Stein et al. (2023) analyzed, in the context of the
evaluation of the general quality of the Merlot
1
repository, to which
extent dierent raters tended to agree about the quality of the
resources inside it. Also, the work of Cechinel etal. (2011), which
analyzed the characteristics of highly-rated OER inside Learning
Object Repositories (LOR) to take them as priori indicators of quality,
provided meaningful insights regarding the assessment process for
determining the quality of OER. As a result of their study, they found
that some of the metrics presented signicant dierences between
what they typied as highly rated and poorly-rated resources and that
those dierences were dependent on the discipline to which the
resource belongs and, also, on the type of the resource. Besides, they
found dierent rating depending on the typology of rater (i.e.,
1 https://www.merlot.org
Romero-Ariza et al. 10.3389/feduc.2023.1082577
Frontiers in Education 03 frontiersin.org
peer-review or user evaluation). ese aspects are of important
consideration for the design of any quality evaluation process of an
OER repository.
Moreover, pedagogical issues make no sense without looking at teachers. They are often key targets in the OER literature, and some authors argue that teachers should not only accompany but also drive the change toward openness in education as crucial players in the adoption of the OER paradigm. Along this line, Nascimbeni and Burgos (2016) draw attention to the necessity of teachers who embrace the OER philosophy and make open education a hallmark of their identity. In this sense, they provide a self-development framework to foster openness among educators. The framework focuses on four areas of activity of an open educator (design, content, teaching, and assessment) and integrates objects, tools, teaching content and teaching practices. Aligned with Nascimbeni and Burgos (2016), we consider open educators crucial players and give them a voice in the evaluation of OERs as key stakeholders. For this reason, we adopt a participatory approach in which we actively engage open educators in the piloting of instruments and in the evaluation of OERs. We advocate the potential of OERs to promote teacher professional development and the importance of developing mechanisms to create communities of practice and networks of OER experts, as well as to properly recognize OER creation as a professional or academic merit. As teacher educators, we are especially interested in the pedagogies underlying the design and use of OERs and envision open education as a powerful field for teacher professional development.
Our research study has focused on the so-called EDIA project, which originated around 2010 as a national initiative promoted by the Spanish Ministry of Education to promote and support dynamics of digital and methodological transformation in schools, improve student learning and encourage new models of education across the country. It offers a repository of educational content for early childhood, primary and secondary education, as well as vocational studies. The OERs are designed by expert creators and active teachers; therefore, they are based on current curricular references. The main feature of the EDIA OERs relates to the methodological approaches that characterize their proposals. These are linked to active methodologies, such as problem-based learning and flipped learning, among others, and to the promotion of digital competences in the classroom. Another important feature of the EDIA project is its continuous evolution, both through the incorporation of new professionals and through improvements to the digital tools used for the creation of resources. This is reflected in the free repository accessible through its website, which incorporates new resources and associated materials (rubrics, templates, etc.). Additionally, in the context of the EDIA community, networks of teachers who discuss the application of resources in the classroom and the use of technology have been generated. This virtual teachers' community constitutes a framework for experimentation to propose new models of educational content that develop aspects such as accessibility and issues such as gender equality and digital citizenship, among others.

The EDIA project fits well with what is expected of an OER repository that offers suitable informational ecosystems and appropriate social and technological infrastructures (Kerres and Heinen, 2015). Some of the key elements for promoting OER-enabled pedagogies are the EDIA network, the annual EDIA meetings, and the shared use of the eXeLearning software.
2. Materials and methods
The analysis of the open resources from the EDIA repository was carried out using a participatory approach involving 48 evaluators in the application of two previously adapted, refined, and validated instruments for OER quality assurance. The quantitative data obtained were triangulated and enriched with the content analysis of the comments for improvement received from the evaluators involved. The sample contains the OERs included in the EDIA repository: 70 OERs covering all subjects of the Spanish curricula across K-6 to K-12 levels (Figure 1).
2.1. Profile of OER evaluators
Evaluators were purposefully selected according to their background and teaching experience; possible authors of the OERs under evaluation were excluded. A total of 48 teachers (54% male, 46% female), with teaching experience ranging between 4 and 35 years, participated in the review process of the EDIA OER evaluation. Overall, more than 87% had up to 10 years of teaching experience, and 32.6% had more than 20 years of experience; the average experience was 18.5 years. To check reliability, some OERs were evaluated simultaneously by two or three independent evaluators (see Table 1).
2.2. Quantitative research
OERs were evaluated through two instruments developed and validated by educational researchers in the context of the evaluation of learning objects: the instrument known as HEODAR (Orozco and Morales, 2016) and rubrics from the Achieve OER Evaluation tool (Birch, 2011), using a version translated and adapted for this research.

In the case of HEODAR, we used a short version with two scales. The original instrument (Orozco and Morales, 2016) was developed and validated using four scales: a psycho-pedagogical scale (E1), a didactic scale (E2), an interface design scale and a navigation scale. The E1 and E2 dimensions are mainly focused on aspects related to teaching and learning processes, which are the focus of this research. The instrument developed for this study applies a 5-point Likert-type scale ranging from 1 (very poor quality of the resource under evaluation) to 5 (very high quality). The questionnaire also includes a "not applicable" option for each item.

In addition, in this adapted version, we included an additional item formulated under the heading "Global assessment of the resource," which has been phrased as "Score the global quality of the OER (from 1, very poor, to 5, very high) and write explicitly the indicators used to assign that score." The inclusion of this item responds to the need to deepen the analysis and allow triangulation.
Table2 shows the Cronbachs alpha values obtained in this study
for the subscales described in the original article, which, in comparison
with the corresponding values reported in the original article (see
footnote in Table 2), could be considered more than acceptable
considering our sample. Results from preliminary exploratory factor
analysis (EFA) are also coincident for explaining variance and the
number of theoretical factors in the dimensions studied when
compared to the original study.
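For transparency, the following minimal sketch shows the standard computation of Cronbach's alpha of the kind used to obtain the reliability values reported in Table 2. The item responses are hypothetical, and the sketch uses the textbook formula rather than the exact script employed in this study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 Likert responses for a two-item subscale (e.g., Q7, Q8)
scores = np.array([[4, 4], [5, 5], [3, 4], [4, 5], [5, 4], [2, 3]])
print(round(cronbach_alpha(scores), 3))
```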
Thus, the scales defined within the HEODAR instrument are as follows:

• HEODAR global scale, which establishes an overall mark (from 1 to 5) for the OER
• E1 (psycho-pedagogical scale), which establishes the mean value of the marks assigned to items Q1–Q3 and Q5–Q10
• E2 (didactic scale), which establishes the mean value of the marks assigned to items Q11–Q16 and Q18–Q30.
Additionally, the Q32 item (an ad hoc item that provides the mean value for the OER based on the values assigned to the first 30 items of the HEODAR instrument) was created to measure internal consistency per evaluation, by comparing it against the value assigned to Q31 ("Global assessment of the resource").
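A minimal sketch of this per-evaluation consistency check follows; the item scores and the agreement tolerance are illustrative assumptions, not parameters reported in the study.

```python
import numpy as np

def internal_consistency_check(item_scores, q31_global, tolerance=0.5):
    """Compare the ad hoc Q32 value (mean of the first 30 HEODAR items,
    ignoring missing values) with the evaluator's global score (Q31).

    The 0.5 tolerance is an assumed, illustrative agreement margin."""
    q32 = np.nanmean(item_scores)          # missing items stored as np.nan
    return q32, abs(q32 - q31_global) <= tolerance

# Hypothetical single evaluation: 30 item scores on the 1-5 scale
scores = np.array([4, 5, 4, np.nan, 4, 4, 3, 4, 5, 5,
                   4, 5, 4, 4, 5, 4, np.nan, 4, 4, 4,
                   4, 3, 5, 4, 4, 4, 5, 4, 5, 5])
q32, consistent = internal_consistency_check(scores, q31_global=4)
print(round(q32, 2), consistent)
```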
Moreover, the Achieve instrument (Birch, 2011) consists of a set of eight rubrics developed to carry out online OER assessments. We used a translated version of the simplified instrument (Birch, 2011). The rubrics associated with the instrument refer to eight dimensions: OER objectives (OBJ), quality of the contents (QCO), usefulness of the resources/materials (UTI), quality of the evaluation (QEV), quality of technological interactivity (QTI), quality of exercises/practices/tasks/activities (QEX), deep and meaningful learning (DML) and accessibility (ACC).
FIGURE1
Sample distribution of EDIA OERs considered in this study.
TABLE1 Specifications related to the sample characteristics: (1) number of OER evaluated per subject and number of evaluations per subject and (2)
distribution of the OER evaluated as a function of educational stage.
Subjects (1) N of OER Relative frequency (%) N of evaluations Relative frequency (%)
Biology and geology 3 4.3 4 4.0
Physical education 2 2.9 4 4.0
Physics and chemistry 1 1.4 1 1.0
Professional development 2 2.8 2 2.0
Geography and history 10 14.3 14 14.0
Foreign language 20 28.6 24 24.0
Interdisciplinary 12 15.7 15 15.0
Literacy 1 1.4 2 2.0
Language and literature 12 17.1 22 22.0
Mathematics 7 10.0 10 10.0
Sociolinguist 1 1.4 2 2.0
Total 71 100 100 100.0
Educational stage (2) N of OER Relative frequency (%)
Primary education 14 19.7
Secondary education 53 74.6
Pre-university 2 2.8
Vocational studies 2 2.8
Total 71 100.0
The ACC rubric should be applied only if evaluators are experts in this category, so this last dimension showed the most missing data, probably due to the evaluators' lack of expertise in technical issues related to accessibility.

According to the authors, these rubrics can be used to rate the effective potential of a particular OER in each learning environment. Each rubric can be used independently of the others, using scores that describe levels of potential quality, usefulness, or alignment with the standards. The original version uses a score from 3 (superior) to 0 (null). In the version developed for this study, the scale has been simplified to 3 (superior), 2 (limited) and 1 (weak), together with a "neutral" option (N/A, "it is not applicable" or "I cannot evaluate this dimension").

Table 3 shows the first dimension of this instrument, the evaluation of the corresponding OER objectives (OBJ), including the explicit criteria associated with each quality level. The whole rubric is shown in Supplementary Table S1.
2.3. Qualitative research
Two independent researchers/encoders carried out the qualitative
analysis. Out of the 105 compiled documents, 90 included qualitative
data that contained explicit information regarding a plethora of aspects
related to the quality of the resources. According to the quantitative
instrument, 14 native categories were dened, which could
be considered positive or negative; therefore, we considered 28
subcategories. See Supplementary Table S2 to check the list of categories
and how they were dened. Each encoder worked independently with
MAXQDA 2020 (VERBI Soware, 2020). e intercoder agreement
analysis involved checking (i) the presence of the code in the document,
(ii) the frequency of the code in the document, and (iii) the coding
similarity. us, the qualitative analysis was developed in three phases
to obtain consensus; the results are shown in Table4.
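As a sketch of criteria (i) and (ii), the following illustrative functions compute document-level agreement on code presence and on code frequency. The categories and codings are hypothetical, and the segment-overlap-based coding similarity computed by MAXQDA (criterion (iii)) is not reproduced here.

```python
def presence_agreement(coder_a: dict, coder_b: dict, categories: list) -> float:
    """% of categories where both coders agree on whether the code appears
    in the document (criterion (i))."""
    matches = sum((cat in coder_a) == (cat in coder_b) for cat in categories)
    return 100.0 * matches / len(categories)

def frequency_agreement(coder_a: dict, coder_b: dict, categories: list) -> float:
    """% of categories where both coders assigned the code the same number
    of times in the document (criterion (ii))."""
    matches = sum(coder_a.get(cat, 0) == coder_b.get(cat, 0) for cat in categories)
    return 100.0 * matches / len(categories)

# Hypothetical codings of one document: {category: number of coded segments}
categories = ["Utility+", "Utility-", "Contents+", "Contents-"]
coder_a = {"Utility+": 2, "Contents-": 1}
coder_b = {"Utility+": 1, "Contents-": 1}
print(presence_agreement(coder_a, coder_b, categories))   # 100.0
print(frequency_agreement(coder_a, coder_b, categories))  # 75.0
```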
3. Results
3.1. Characteristics of the sample
The instruments described above were applied to the analysis of 100 reports derived from the evaluation of 71 different OERs belonging to the EDIA project, which correspond to different educational levels: 74% applied to secondary education, 20% to primary education, 2.8% to the pre-university stage and the remaining 2.8% to vocational studies. Table 1 provides data on the subject domain and educational stage of the OERs considered in the sample.
3.2. Quantitative study
It should benoted that the good internal consistency of the data
obtained through the instruments are kept for a random selection of
a set of evaluations. Next, wesummarize the most remarkable results
obtained through the quantitative analysis.
3.2.1. HEODAR instrument
Considering the complete set of data (with no missing values in any of the items within each scale), the analysis of the HEODAR data showed the following results (Table 5).

The results concerning the mean values of the global HEODAR scale (i.e., overall evaluation) and the Q32 item reveal that the EDIA OERs are considered of good or very good quality. Approximately 84% of the evaluations scored equal to or higher than 4. Moreover, among the 88 total evaluations considered, only 14 were marked with values below 4 (the value 3 being the lowest mark assigned to one specific OER). The analysis of the mean/mode values given to the four global scales reveals a good internal consistency of the evaluation performed per evaluator. That is, the mean values assigned to item Q32 show no significant difference from the corresponding means in the psycho-pedagogical (E1) and didactic (E2) scales.
TABLE2 Reliability statistical descriptors for the HEODAR scales and
subscales used.
Cronbach α N of items Item ID
HEODAR*
0.943
E1 scale (psycho-
pedagogical)
0.804
MOT (motivation
and attention)
0.557 3 Q1, Q2, Q3
DIF (diculty) 0.671 2 Q5, Q6
INT (interactivity) 0.867 2 Q7, Q8
CRE (creativity) 0.728 2 Q9, Q10
E2 scale (didactic) 0.921
CONTX (context) 0.513 2 Q11, Q12
OBJ (objectives) 0.762 4 Q13–Q16
CONTN (content) 0.839 8 Q18–Q25
ACT (activities) 0.803 5 Q26–Q30
*Cronbach α for this subscale in HEODAR E1 original article: 0.918. Cronbach α for this
subscale in HEODAR E2 original article: 0.941. e “isolated” items do not form subscales
and have been omitted from this table (Q17, Q31). erefore, item Q4 has been omitted
from this study, since 32% of the data were missing.
TABLE3 Criteria definition for the “Objectives” category of the Achieve OER Evaluation tool.
Objectives 3—Superior 2—Limited 1—Weak N/A
Alignment with curriculum
objectives/standards
e OER proposes or addresses
(explicitly or implicitly) objective/s/
standards in an adequate/
comprehensive manner and aligned
with the curriculum
e OER proposes or
addresses objectives/
standards (explicitly or
implicitly) in an appropriate
way but only partially
aligned with the curriculum
e OER does not propose
or address objectives/
standards (explicitly or
implicitly) or these are not
aligned with the curriculum
is dimension is not
applicable or cannot
beassessed in this OER
This reveals a robust coherence among the evaluators applying the HEODAR questionnaire, showing the high reliability of the data obtained via this instrument and by the group of evaluators involved. Table 6 shows the values for the psycho-pedagogical and curricular aspects and their subscales.

In agreement with its global value, all the corresponding subscales in the E1 scale are valued above 4.0, with the subscale about interactivity (INT) being the worst considered within this group. However, item Q1 within the MOT subscale (which refers to the general aspect and presentation of the OER) is the one receiving the lowest mark (3.81), which is even below the mean value of the scale.

Within the MOT subscale, item Q3 exhibited higher values than the rest of the items in the subscale. This item refers to students' engagement with the resource, the student's role and what they must do according to the information provided by a particular OER. This result is consistent with the qualitative analysis in the category "Motivation and Attention," which has a frequency of 5.2% (Table 7), with 75% of the comments related to positive aspects. Within this category, we have included allusions such as "the theme pulls off the attention of the students" or "It is a sensational OER, very motivating and with a great deal of support."
Similarly, none of the subscales within the scale related to didactic aspects receives a significantly lower mark than the mean value for the scale (4.32). The CONTENT subscale obtains a slightly lower value, with items Q21 and Q22 being notable cases (3.93 and 3.98, respectively):

• Q21: Allowing the students to interact with the content using the resource.
• Q22: Providing complementary information for those students interested in widening and deepening their knowledge.

This specific result, associated with the type of learning, will be discussed later. However, at the scale level, this aspect has been well valued (4.24). Likewise, in the qualitative analysis, the content category exhibits one of the highest frequencies (12.40%, Table 7), with 61.3% of the comments coded as positive aspects.
3.2.2. ACHIEVE instrument
Table8 shows the mean values and the corresponding standard
deviations of the nine dimensions included in the Achieve instrument.
Each dimension was evaluated using a rubric describing three
dierent levels of achievement.
First, it must benoted that the dimension evaluating accessibility
(ACC) was le blank by almost half of the evaluators, obtaining only
56 evaluations of the total sample (N = 101). is fact is aligned with
TABLE4 Results of the intercoder agreement (%) after the 3 stages.
1st stage 2nd
stage
3rd
stage
3rd
stage
Presence of the
code
96.0 96.0 100.0 100.0
Frequency of
the code
4.0 4.0 100.0 100.0
Coding
similarity*
41.4 64.9 100.0 99.2
*e coding similarity has been calculated for a 75% overlap range between selected
segments in the 1st, 2nd and 3rd phase and for a 90% of overlap range in the 3rd phase.
TABLE5 Mean values and corresponding standard deviations (SD)
obtained for the global scales included in HEODAR and Q32.
Mean value SD
HEODAR 4.32 0.50
E1 4.32 0.51
E2 4.32 0.52
Q32 4.17 0.61
TABLE6 Mean values and corresponding standard deviations (SD)
obtained for the E1 and E2 global scales and sub-scales included in the
HEODAR instrument.
Mean
value
SD Mean
value
SD
E1 scale (psycho-
pedagogical)
4.32 0.51 E2 scale
(didactic)
4.32 0.52
MOT
(motivation and
attention)
(N = 96)
4.25 0.56 CONTX
(context)
(N= 97)
4.40 0.67
Q1 3.81 0.92 Q11 4.32 0.83
Q2 4.34 0.75 Q12 4.48 0.79
Q3 4.61 0.60 OBJ
(objectives)
(N= 97)
4.47 0.52
DIF (diculty)
(N = 96)
4.22 0.74 Q13 4.47 0.68
Q5 4.23 0.80 Q14 4.43 0.73
Q6 4.20 0.90 Q15 4.52 0.64
INT
(interactivity)
(N = 93)
4.09 0.84 Q16 4.43 0.66
Q7 4.05 0.93 CONTN
(content)
(N= 86)
4.24 0.65
Q8 4.13 0.87 Q18 4.19 0.96
CRE (creativity)
(N = 99)
4.47 0.65 Q19 4.47 0.74
Q9 4.46 0.71 Q20 4.37 0.79
Q10 4.47 0.74 Q21 3.93 1.12
Q22 3.98 1.14
Q23 4.50 0.82
Q24 4.17 0.98
Q25 4.29 0.88
ACT
(activities)
(N= 90)
4.39 0.57
Q26 4.35 0.74
Q27 4.46 0.74
Q28 4.15 0.86
Q29 4.47 0.75
Q30 4.52 0.78
This fact is aligned with the low frequency of the corresponding category in the qualitative analysis (1.03%, Table 7), suggesting that this dimension is somehow neglected by the evaluators, either for lack of information or for lack of expertise in the area.

Most of the remaining dimensions received a mean value above 2.0 out of a 3-point evaluation, which denotes the good quality perceived in the EDIA OERs. This is in line with the previously described results obtained from the HEODAR instrument. Although the general qualification is good (2.75 out of 3), several aspects must be noted. On the one hand, objectives, contents, and learning are among the best valued (2.87, 2.84, and 2.77, respectively), which is endorsed by the qualitative analysis, where the homonymous dimensions received a remarkably larger amount of positive than negative feedback (81.8, 61.3, and 86.0%, respectively). On the other hand, interactivity and accessibility are among the lowest valued (2.55 and 2.54, respectively), which is likewise supported by the qualitative data, where the homonymous categories received a noticeably larger amount of negative than positive comments (62.5 and 75.0%, respectively).
3.3. Qualitative analysis
The categories used for the content analysis of the open feedback provided by the evaluators are aligned with the themes included in the items of both quantitative instruments, affording an in-depth view of these issues. Table 7 shows that most comments received were coded under the categories related to the utility of the resources, the quality of the contents, how the learning was orchestrated, and the learning activities suggested. Next, we provide a brief overview of specific disciplinary areas. For the sake of simplicity, the disciplinary subjects are grouped into four main areas: (1) Language (language and literature, foreign language, literacy and sociolinguistics), (2) Social Sciences (geography and history), (3) Sciences (biology and geology, physics and chemistry and mathematics), and (4) Other (physical education, professional development and interdisciplinary). Since there are very different numbers of OERs per area (Table 1), we are cautious with these data. However, the asymmetrical OER distribution could be a result per se. In this sense, the most remarkable aspect is that scientific subjects are underrepresented, with the number of Natural Sciences OERs being only about 6% of the total sample. Figure 2 shows the frequency of positive and to-be-improved aspects in each of the above-mentioned four areas.
As shown, Sciences is the area with the largest number of positive comments; however, if we disaggregate this subset, we observe that the Natural Science subjects are the ones with the poorest evaluations (biology and geology and physics and chemistry, with 71.7 and 100% negative feedback, respectively), while only 23.3% of the feedback for the mathematics OERs was negative. Considering the whole set of OERs, the best ranked were those corresponding to physical education (included in "Other") and literature (included in "Language"), with the highest positive/negative ratios of the whole set of evaluated OERs. With the aim of identifying strengths and aspects for improvement in the repository, Figure 3 shows the frequency of comments received in each category.

The most valued aspects of the repository are related to features included in the categories learning, contents, and objectives, whereas the OERs' utility and timing and scheduling (i.e., temporalization) seem to be characteristics to amend/improve. See Supplementary Table S2 for the descriptors included in each category.
Moreover, the analysis per subject shows that the Language OERs receive more than 60% of comments for improvement on psycho-pedagogical aspects, such as context, design, difficulty level and assessment/feedback. Similarly, the Social Sciences resources could also be improved in psycho-pedagogical aspects (more than 40% of comments for improvement in motivation or learning). On the other hand, the scientific OERs have been most negatively evaluated in relation to didactic aspects, such as objectives or activities (more than 30% in both cases), and to psycho-pedagogical ones (utility, 39.1%). Note that "Foreign Language" concentrates all the negative feedback in the accessibility category simply because this aspect was explicitly evaluated only for those OERs.
In the following, we draw on the results obtained to discuss and respond to the research questions previously posed.
4. Discussion and conclusion
In the context of OERs, interoperability refers to the capability of a resource to facilitate the exchange and use of information, allowing reusability and adaptation by others.
TABLE7 Categories and frequencies in the qualitative analysis of the
feedback received from evaluators.
N of comments Frequency (%)
Utility 102 13.2
Contents 96 12.4
Learning 92 11.9
Activities 90 11.6
Temporalization 74 9.6
Assessment and feedback 62 8.0
Diculty 62 8.0
Design 60 7.8
Objectives 46 5.9
Motivation and attention 40 5.2
Interactivity 34 4.4
Accessibility 8 1.0
Context 8 1.0
Professional
development
0 0.0
TABLE8 Mean values and corresponding standard deviations (SD)
obtained for each category in the Achieve instrument.
Mean value SD N
Total 2.75 0.29 101
OBJ (objectives) 2.87 0.37 101
QCO (contents) 2.84 0.39 101
UTI (utility) 2.73 0.51 101
QEV (evaluation) 2.76 0.43 101
QTI (interactivity) 2.55 0.59 101
QEX (exercises) 2.71 0.51 101
DML (learning) 2.77 0.42 101
ACC (accessibility) 2.54 0.57 56
On the one hand, regarding accessibility, the results show a mean value of 2.54 out of 3 in the Achieve instrument (lower than the other items) and the highest amount of missing data (only 56 out of 101 evaluations), with few comments from the evaluators (frequency 1%). The missing data and low frequency of comments suggest that accessibility is an aspect somehow neglected by evaluators. On the other hand, perceived utility obtains 54.9% of negative comments referring to technical aspects and to the lack of appropriate educational metadata describing the educational level and age group. According to some authors (Santos-Hermosa et al., 2017), this is a key aspect to consider when discussing quality issues, since OER metadata are used to assess the educational relevance of the OER and the repository to which it belongs, as they determine how likely it is to be used.

Another issue affecting the utility of OERs is related to temporalization. Item Q17 in HEODAR refers to the estimated time for the use/implementation of the OER. This item received a mean value of 3.69 (noticeably lower than the remaining items). This is in line with the data obtained through the qualitative study, where temporalization was frequently commented on (9.56%) and most of the comments were negative (91.9%); for example, "the number of sessions should be considerably incremented" or "it dedicates too much time to some particular activities." Since time and temporalization are critical issues highlighted by experts because they might significantly affect the usability of resources, we suggest that attending to granularity in OER design will enhance OER flexibility. Articulating digital resources into minimal meaningful pieces that can be used independently or combined with others optimizes time and increases OER versatility and thus usability (Ariza and Quesada, 2011).
Regarding the pedagogical and didactic design of OERs, Van Assche (2007) defined interoperability as the ability of two systems to work together. Experts in didactics claim that this term should be interpreted in a wider sense, also including the semantic, pragmatic, and social interoperability related to the educational systems where OERs operate. Semantic interoperability refers to the way information is given and interpreted, while pragmatic and social interoperability address, among other things, the appropriateness and relevance of the pedagogical goals, their content quality, and the perceived utility of the resources (Ariza and Quesada, 2011). From this perspective, we can conclude that there are aspects related to the pedagogical and didactic dimensions that influence the semantic, pragmatic, and social interoperability of the OERs under evaluation. We discuss them further below on the basis of the results obtained.
The quantitative results reveal that the best evaluated aspect was the learning objectives (4.47 out of 5 in HEODAR and 2.87 out of 3 in the Achieve instrument), considering whether they were aligned with the OER content and the learning activities. The activities themselves were also explicitly evaluated (2.71 out of 3) and showed one of the highest comment frequencies, with a balance between positive and negative feedback.
FIGURE2
Frequency (%) of positive (solid) vs. negative (lined) feedback.
FIGURE3
Frequency (%) of comments about positive and the to beimproved aspects in the EDIA repository.
Regarding aspects to be improved, evaluators refer to the nature of the activities, which are either too repetitive, too complex or diverse, or not aligned with the objectives. On the contrary, some of the positive comments referred to an adequate number and sequence of activities or to the capacity to address different learning styles.
Regarding the quality of OER contents and their impact on the social interoperability of the resources, content quality is the second category receiving the most comments from the evaluation (12.4%), with 61.3% of positive comments. Evaluators referred to the clarity, appropriateness, comprehensibility and level of detail of the information provided, the presence or absence of complementary information to gain a complete and deep understanding of the topic to teach, the reliability of the references and sources of information provided, and the adequacy of the OER to the educational target group. In addition, the quantitative data show a very positive evaluation of the quality of the OER content, reflected in the high scores received from both instruments (4.24 out of 5 and 2.84 out of 3).

Under the interactivity dimension, the evaluation instruments include items determining the students' roles in the learning process according to the kind of activities, learning scenarios and methodological approaches used. This dimension shows mean values per subscale of 4.09 out of 5 in HEODAR and 2.55 out of 3 in the Achieve instrument, with a prevalence of comments for improvement in the qualitative analysis.
Sometimes, interactivity is related to opportunities for formative assessment. Evaluation is explicitly considered in the Achieve instrument, obtaining a mean value of 2.76; this reveals that its quality is closer to level 3 (superior) than to level 2 (limited). In addition, this is the case for item Q31 in HEODAR, which received a mean value of 4.01 out of 5. This item refers to the feedback given to students as formative assessment. Looking at the comments received from experts, we find references to assessment in 8.01% of the cases; of them, 41.9% referred to aspects that could be improved: "there are no explicit indications for proper feedback after doing each activity/task, as well as the final project" or "self-evaluation resources should be included." These results are useful for guiding the improvement of evaluation in the EDIA resources analyzed so far.
As described in the previous section, the psycho-pedagogical dimension refers to the type of learning taking place, the level of difficulty, students' motivation and attention and the learning context. In the qualitative analysis, 11.9% of experts' comments were coded under the category "learning," most of them being positive (86.9%) and highly appreciating the opportunities to conduct collaborative work, to promote autonomous learning and to develop key competences while achieving transversal learning outcomes in a meaningful way. Items Q21 and Q22 in the HEODAR instrument explicitly evaluated enhanced learning, showing mean values of 3.93 and 3.98 out of 5, respectively, which, though denoting good quality, are among the lowest mean values.

The overall evaluation of the psycho-pedagogical aspects in the HEODAR and Achieve instruments shows very high quality, leaving little room for improvement. The open feedback received from experts shows more positive comments than negative ones. It is within the category "difficulty" where we find more comments pointing out aspects that may be improved, these being attributable to specific OERs for which either the elicitation and use of students' previous knowledge, the language employed, the cognitive demand or the progression of the learning sequence is not considered appropriate for the target students.
Finally, it should benoted that if wehad limited this study to the
quantitative data, it would not have been possible to conclude anything
else but the high quality of the EDIA repository, with the lowest mean
value being 3.93 out of 5in HEODAR for item Q21 about interactivity
and the lowest mean value of 2.54 out of 3in the Achieve instrument
for OER accessibility. However, the qualitative data provide a rich,
detailed picture, which allows us to better describe the characteristics
of the EDIA OER and to identify which aspects might still beimproved
and how to take a step forward toward excellence.
4.1. Final remarks
This work responds to one of the challenges or deficiencies associated with the adoption of OER-based education pointed out by UNESCO: the lack of clear mechanisms for evaluating the quality of repositories (the lack of any clear quality assurance mechanisms), which has resulted in unclear standards and poor quality of distance education (UNESCO, 2010).
The use of a mixed-methods approach involving both quantitative and qualitative methods combines the affordances of different techniques and compensates for their limitations, allowing us to triangulate the information obtained from different sources, which strengthens the consistency and reliability of the results and provides a richer perspective. In this sense, the qualitative analysis of the constructive feedback received from evaluators offered an in-depth view of the general quality of the resources that widened the perspective reached solely by the application of two previously validated quantitative instruments. Indeed, although the results from the quantitative approach show high mean values in all the quality dimensions evaluated, the qualitative approach served to (1) triangulate and verify the results and (2) obtain comprehensive information about which aspects should be improved, why and how. All the results allow us to develop a better understanding of how OERs are perceived by a broad group of key stakeholders and how they can be improved to move a step forward in unveiling OERs' full potential to provide inclusive, high-quality education.
Considering what has been previously presented and discussed, we can draw several conclusions in relation to the characteristics of the EDIA repository.

The interoperability of EDIA OERs, though quite good, may be improved by enhancing the accessibility and usability of the resources. Indeed, the dimensions related to interactivity and accessibility are those receiving the lowest scores on the quantitative scales and significant comments for improvement in the open feedback from evaluators. Usability is a general term that depends on a wide range of aspects, including not only accessibility and technical interoperability but also the associated metadata and OER granularity. Time seems to be a controversial issue in the lesson planning associated with several OERs, since evaluators often refer to either over- or underestimation of the time necessary to successfully implement a particular OER in the classroom. We consider that one way to address time issues would be to increase granularity in EDIA OERs.

On the other hand, usability depends on the perceived quality of educational resources. The study carried out shows that the dimensions best evaluated are those related to the quality of the objectives, the content and the type of learning fostered by the evaluated OERs, as can be seen from the mean values achieved in the quantitative instruments and the frequency of the positive comments received in relation to the categories identified through the qualitative analysis.
In relation to psycho-pedagogical aspects, quantitative and qualitative data show that EDIA OERs promote students' engagement, motivation and the acquisition of key abilities and deep learning. We find differences in the feedback received from evaluators depending on the school subject. On a general level and with some exceptions, comments for improvement mainly refer to psycho-pedagogical aspects (motivation, attention, interactivity, and creativity) in OERs for social sciences subjects, while science-related OERs might be improved in issues related to the didactic domain (objectives, content and activities). In line with this observation, there is a deficit of science-related OERs in the study sample. Given the reduced number of resources and the poorer evaluation received, there is a clear need to increase design efforts and to stimulate the development and exchange of open materials under the OER philosophy in this field. This would enhance the co-creation and continuous improvement of science teaching and learning at the national and international levels.

Finally, this work contributes to increasing the research evidence available to inform future steps and to optimize public efforts aimed at making the most of the OER paradigm to enhance learning and promote inclusive and equitable quality education. Nevertheless, the limitations of our work must be taken into account, mainly in relation to the conclusions derived for some disciplines for which the sample size was small due to the constraints of the EDIA repository itself. In any case, this has indeed been pointed out as an aspect for improvement in the EDIA project, which is now also being enhanced along this line.

In relation to future lines of work, we are extending the collaboration between the Spanish Ministry of Education and our research group to undertake the study of the impact of EDIA OERs on students' motivation and learning and to develop a better understanding of the key issues related to teachers' engagement in the design, adaptation, and implementation of OERs.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.
Author contributions
MR was responsible for the conceptualization of the manuscript, the project administration and supervision, the investigation, writing the original draft, and reviewing and editing the final version. AA was responsible for the conceptualization of the manuscript, the visualization, the research methodology, the investigation, software use, the formal analysis, and reviewing and editing the final version. AQ was responsible for the visualization, the research methodology, the investigation, software use, the data curation, the validation of the instruments, the formal analysis, writing part of the original draft, and reviewing and editing the final version. PR was responsible for the visualization, the research methodology, the investigation, software use, the formal analysis, writing the original draft, and reviewing and editing the final version. All authors contributed to the article and approved the submitted version.
Acknowledgments
The authors acknowledge funding from the Spanish National Centre for Curriculum Development through Non-Proprietary Systems (CEDEC), dependent on the Spanish Ministry of Education, through the transference contract project with the University of Jaen with reference 3858.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2023.1082577/full#supplementary-material
References
Almendro, D., and Silveira, I. F. (2018). Quality assurance for open educational resources: the OER trust framework. Int. J. Learn. Teach. Educ. Res. 17, 1–14. doi: 10.26803/ijlter.17.3.1

Ariza, M. R., and Quesada, A. (2011). "Interoperability: standards for learning objects in science education" in Handbook of research on E-learning standards and interoperability: frameworks and issues. eds. F. Lazarinis, S. Green and E. Pearson (Hershey, PA: IGI Global), 300–320.

Baas, M., Admiraal, W., and van den Berg, E. (2019). Teachers' adoption of open educational resources in higher education. J. Interact. Media Educ. 2019:9. doi: 10.5334/jime.510

Baas, M., van der Rijst, R., Huizinga, T., van den Berg, E., and Admiraal, W. (2022). Would you use them? A qualitative study on teachers' assessments of open educational resources in higher education. Internet High. Educ. 54:100857. doi: 10.1016/j.iheduc.2022.100857

Birch, R. (2011). Synthesized from eight rubrics developed by ACHIEVE. Available at: www.achieve.org/oer-rubrics

Bodily, R., Nyland, R., and Wiley, D. (2017). The RISE framework: using learning analytics to automatically identify open educational resources for continuous improvement. Int. Rev. Res. Open Distrib. Learn. 18, 103–122. doi: 10.19173/irrodl.v18i2.2952

Cechinel, C., Sánchez-Alonso, S., and García-Barriocanal, E. (2011). Statistical profiles of highly-rated learning objects. Comput. Educ. 57, 1255–1269. doi: 10.1016/j.compedu.2011.01.012
Kerres, M., and Heinen, R. (2015). Open informational ecosystems: the missing link for sharing educational resources. Int. Rev. Res. Open Dist. Learn. 16, 24–39. doi: 10.19173/irrodl.v16i1.2008

Mncube, L. S., and Mthethwa, L. C. (2022). Potential ethical problems in the creation of open educational resources through virtual spaces in academia. Heliyon 8:e09623. doi: 10.1016/j.heliyon.2022.e09623

Mohamed Hashim, M. A., Tlemsani, I., and Duncan, M. R. (2022). A sustainable university: digital transformation and beyond. Educ. Inf. Technol. 27, 8961–8996. doi: 10.1007/s10639-022-10968-y

Nascimbeni, F., and Burgos, D. (2016). In search for the open educator: proposal of a definition and a framework to increase openness adoption among university educators. Int. Rev. Res. Open Distrib. Learn. 17, 1–17. doi: 10.19173/irrodl.v17i6.2736

Open Educational Resources Evaluation Tool Handbook (2012). Available at: https://www.achieve.org/files/AchieveOEREvaluationToolHandbookFINAL.pdf

Orozco, C., and Morales, E. M. (2016). Psychometric testing for HEODAR tool. TEEM '16: Proceedings of the Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, pp. 163–170.

Pardo, A., Ellis, R., and Calvo, R. A. (2015). Combining observational and experiential data to inform the redesign of learning activities. LAK '15: Proceedings of the Fifth International Conference on Learning Analytics and Knowledge, pp. 305–309.

Pattier, D., and Reyero, D. (2022). Contributions from the theory of education to the investigation of the relationships between cognition and digital technology. Educ. XX1 25, 223–241. doi: 10.5944/educxx1.31950

Santos-Hermosa, G., Ferran-Ferrer, N., and Abadal, E. (2017). Repositories of open educational resources: an assessment of reuse and educational aspects. Int. Rev. Res. Open Dist. Learn. 18, 84–120. doi: 10.19173/irrodl.v18i5.3063

Stein, M. S., Cechinel, C., and Ramos, V. F. C. (2023). Quantitative analysis of users' agreement on open educational resources quality inside repositories. Rev. Iberoam. Tecnol. Aprend. 18, 2–9. doi: 10.1109/RITA.2023.3250446

UNESCO (2010). Global trends in the development and use of open educational resources to reform educational practices.

UNESCO (2019). Recommendations on open educational resources. Paris: UNESCO.

Van Assche, F. (2007). Roadmap to interoperability for education in Europe. Available at: http://insight.eun.org/shared/data/pdf/life_book.pdf

VERBI Software (2020). MAXQDA 2020 [computer software]. Berlin, Germany: VERBI Software. Available at: maxqda.com

Wiley, D., and Green, C. (2012). "Why openness in education" in Game changers: education and information technologies, 81–89. Available at: https://library.educause.edu/resources/2012/5/chapter-6-why-openness-in-education

Yuan, M., and Recker, M. (2015). Not all rubrics are equal: a review of rubrics for evaluating the quality of open educational resources. Int. Rev. Res. Open Distrib. Learn. 16, 16–38. doi: 10.19173/irrodl.v16i5.2389

Zancanaro, A., Todesco, J. L., and Ramos, F. (2015). A bibliometric mapping of open educational resources. Int. Rev. Res. Open Distrib. Learn. 16, 1–23. doi: 10.19173/irrodl.v16i1.1960