How Design Science Research Artefacts Turn Knowledge into Real-World Impact


When candidates begin a DBA or professional doctorate, they’re often told their work should be both rigorous and relevant. Design Science Research (DSR) is one of the few approaches that take this demand seriously: it requires you to build something that works in practice and to produce knowledge that advances the field.

At the centre of this logic is the artefact.

In DSR, the artefact is more than a tool or prototype. It is the point where theory, prior evidence, and practitioner experience are operationalised into something that can actually change how work is done. And when we evaluate the artefact in context, we don’t just “test if it works” – we learn something new about the problem itself.

This post unpacks that double role.


1. What exactly is an artefact in Design Science Research?

In DSR, an artefact can be many things: a construct, a model, a method, or a concrete instantiation (such as a piece of software, a dashboard, or a workshop format).

Across these different forms, an artefact has three defining features:

  1. It is purposeful – created to address a specific, real-world problem.
  2. It is grounded in a knowledge base – drawing on existing theories, empirical findings, and design principles.
  3. It is open to evaluation – you can reasonably ask, “Did this make things better, and how do we know?”

So the artefact is not “just a solution”. It’s a designed hypothesis about how certain mechanisms (derived from theory + experience) can improve a problematic situation.


2. How an artefact operationalises theory and experience

2.1 From abstract knowledge to concrete design

DSR assumes that we don’t design from a blank slate. Instead, we search and combine existing knowledge – theories, frameworks, case studies, and practitioner know-how – and encode it into the artefact’s structure.

Examples:

  • A theory about decision bias becomes a set of prompts and constraints in a decision template.
  • A theory about organisational learning shapes the phases and feedback loops in a new workshop format.
  • Empirical findings on user resistance to change become specific onboarding steps built into a new IT system.

In each case, the artefact turns “knowledge in journals and people’s heads” into:

  • Fields in a form
  • Steps in a process
  • Rules in an algorithm
  • Visualisations on a dashboard
  • Roles and responsibilities in a governance model

This is what it means to operationalise knowledge: to make it executable, observable, and repeatable in practice.
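
To make this tangible, here is a minimal sketch in Python of how a debiasing mechanism from the decision-bias literature might be encoded as fields and validation rules in a decision template. All names (DecisionRecord, the fields, the rule texts) are hypothetical illustrations, not a prescription.

```python
from dataclasses import dataclass, field

# Hypothetical decision template: each field and rule operationalises a
# debiasing mechanism, e.g. forcing alternatives and disconfirming evidence
# to be stated before a decision is recorded.
@dataclass
class DecisionRecord:
    decision: str
    expected_outcome: str
    alternatives_considered: list[str] = field(default_factory=list)
    disconfirming_evidence: str = ""
    confidence_pct: int = 50  # prompts an explicit confidence estimate

    def validate(self) -> list[str]:
        """Return violated design rules instead of silently accepting input."""
        issues = []
        if len(self.alternatives_considered) < 2:
            issues.append("List at least two alternatives (counters anchoring).")
        if not self.disconfirming_evidence.strip():
            issues.append("State evidence against the decision (counters confirmation bias).")
        if not 0 <= self.confidence_pct <= 100:
            issues.append("Confidence must be between 0 and 100.")
        return issues


record = DecisionRecord(decision="Adopt vendor X", expected_outcome="20% faster onboarding")
for issue in record.validate():
    print("Design rule violated:", issue)
```

Each rule in validate() is a traceable design decision: it points back to a specific theory or observation, and it can be revised when evaluation shows it does not help.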

2.2 Blending formal theory and experiential knowledge

DSR explicitly draws on kernel theories (established theories from reference disciplines) and local, experiential knowledge from practitioners.

In a typical project you might combine:

  • Formal theory
    • e.g., contingency theory, behavioural economics, service-dominant logic
  • Experiential insight
    • e.g., “our salespeople ignore long forms”, “middle managers are overloaded in Q4”, “customers only respond if the process is visibly simple”

The artefact weaves these together. Design decisions are justified both by:

  • “The literature suggests this mechanism should work because…”
  • “Practitioners report that this constraint is necessary in this context…”

This makes the artefact a concentrated package of both scientific and practice-based knowledge, ready to be enacted in the organisation.


3. Solving practice-relevant problems: the artefact as intervention

DSR is often described as a problem-solving paradigm: it seeks to extend human and organisational capabilities by creating innovative artefacts.

For practice-relevant problems, the artefact:

  1. Clarifies the problem
    To design something, you must pin down what “better” actually means: faster, more accurate, fairer, more transparent, less risky, etc. This sharpens the problem description itself.
  2. Constrains the solution space
    The artefact embeds specific assumptions: which users matter, which outcomes count, which trade-offs are acceptable. These assumptions are traceable and discussable.
  3. Makes change testable
    Because the artefact is concrete, we can ask before-and-after questions (a simple comparison is sketched at the end of this section):
    • What changed when we introduced it?
    • For whom?
    • Under which conditions?

The result is not just a “fix” but a disciplined intervention that links a messy real-world issue to explicit design rationales.
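
The before-and-after questions from point 3 can be made operational with even a very simple comparison. The sketch below uses invented numbers and group names purely for illustration: days to approve a request, split by user group so that “for whom?” can be answered.

```python
from statistics import mean

# Invented data: days to approve a request before and after the artefact
# was introduced, split by user group.
before = {"sales": [12, 14, 11, 15], "operations": [9, 10, 8]}
after = {"sales": [8, 9, 7, 10], "operations": [9, 11, 8]}

for group in before:
    delta = mean(after[group]) - mean(before[group])
    print(f"{group}: average change of {delta:+.1f} days after the intervention")
```

A real evaluation would of course need to deal with confounders, sample size, and context, but the principle holds: a concrete artefact turns a vague ambition to “get better” into observable differences.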


4. Evaluation: how testing the artefact advances our knowledge of the problem

Evaluation in DSR is not an afterthought; it is central. It is where the project “earns” its contribution to knowledge.

When you evaluate an artefact, you are implicitly testing a structured claim:

If we implement artefact A (with design logic D) in context C, then outcome O will improve for stakeholders S.
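
One way to keep this claim honest is to write it down explicitly before evaluation starts. The sketch below is a minimal illustration, with hypothetical example values, of what such a structured claim could look like.

```python
from dataclasses import dataclass

# Illustrative structure for the claim under test: naming the artefact, its
# design logic, the context, the target outcome, and the stakeholders makes
# the claim explicit and therefore evaluable.
@dataclass(frozen=True)
class EvaluationClaim:
    artefact: str        # A: what is introduced
    design_logic: str    # D: why it should work
    context: str         # C: where it is introduced
    outcome: str         # O: what should improve
    stakeholders: str    # S: for whom

    def as_sentence(self) -> str:
        return (f"If we implement {self.artefact} ({self.design_logic}) "
                f"in {self.context}, then {self.outcome} should improve "
                f"for {self.stakeholders}.")


claim = EvaluationClaim(
    artefact="decision template v2",
    design_logic="debiasing prompts from behavioural economics",
    context="quarterly investment reviews in one business unit",
    outcome="the decision reversal rate",
    stakeholders="portfolio managers",
)
print(claim.as_sentence())
```

Each field then becomes a handle for the evaluation questions that follow.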

Good DSR evaluation asks:

  • Does the artefact work? (utility, quality, efficacy)
  • How and why does it work (or fail)?
  • Under what conditions and limitations?

From a knowledge perspective, this does at least four things:

  1. Refines our understanding of the problem
    Evaluation often reveals that the original problem framing was incomplete or off-target. For example, a performance issue may turn out to be more about incentives than information quality.
  2. Surfaces boundary conditions
    You learn where the artefact stops working: organisation size, culture, regulatory environment, digital maturity, etc. This sharpens the “where does this apply?” part of your knowledge contribution.
  3. Generates design knowledge
    Beyond “this tool works”, you can extract design principles and mechanisms (“If you want X in context Y, you should build in feature Z because…”).
  4. Maps paths of knowledge
    Recent work on “knowledge paths” in DSR shows how knowledge evolves across cycles of building and evaluating artefacts – from very local insights to more generalised design principles.

In short, evaluation is where practice and theory meet again: practice gives us evidence, theory helps us interpret it, and the next iteration of the artefact embodies what we’ve learned.


5. Practical implications for DBA candidates and practitioner-researchers

If you’re working on a DBA or a practice-oriented doctorate using DSR, it helps to treat your artefact and its evaluation as a knowledge engine, not just project deliverables.

Here are some guiding questions you can use directly in your research design:

For your artefact

  1. Which specific theories and empirical findings are you operationalising?
    Name them. Show exactly where they appear in your design.
  2. Which experiential insights from practice shaped the artefact?
    Make practitioner input explicit, not invisible.
  3. How does each key feature of the artefact reflect a design rationale?
    Complete the sentence: “We included X because prior work or experience suggests Y mechanism is important in this problem.”
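
For point 3, a simple design rationale register keeps the chain from knowledge to feature visible. The sketch below is purely illustrative; the features, theories, and insights are placeholders for your own project.

```python
# Hypothetical design rationale register: each artefact feature is traced to
# the knowledge it operationalises. All entries are illustrative placeholders.
design_rationale = [
    {
        "feature": "mandatory 'evidence against' field",
        "kernel_theory": "confirmation bias (behavioural economics)",
        "practitioner_insight": "reviewers rarely challenge the proposer",
        "expected_mechanism": "forces disconfirming evidence to be surfaced",
    },
    {
        "feature": "one-page summary layout",
        "kernel_theory": "cognitive load research",
        "practitioner_insight": "our salespeople ignore long forms",
        "expected_mechanism": "keeps completion effort low enough to sustain use",
    },
]

for entry in design_rationale:
    print(f"We included '{entry['feature']}' because {entry['kernel_theory']} "
          f"and the observation that '{entry['practitioner_insight']}' suggest "
          f"this mechanism: {entry['expected_mechanism']}.")
```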

For your evaluation

  1. What are you trying to learn about the problem, not just the solution?
    Frame evaluation questions that target problem understanding, not only artefact performance.
  2. Which outcomes and mechanisms will you measure or observe?
    Think beyond KPIs to mechanisms: behaviours, interactions, decision patterns.
  3. How will you capture boundary conditions?
    Document where the artefact works less well, and what this says about the nature of the problem.
  4. What form will your knowledge contribution take?
    • Refined problem description?
    • Context-sensitive design principles?
    • A mid-range design theory?


6. Bringing it together

In Design Science Research, the artefact and its evaluation are tightly coupled:

  • The artefact is how we operationalise knowledge – formal theories and practitioner experience are turned into a structured intervention for a real problem.
  • The evaluation is how we advance knowledge – by observing what happens when that intervention meets reality, we refine our understanding of both the problem and the conditions under which particular solutions work.

For DBA candidates and practitioner-researchers, this is good news. You don’t have to choose between being “too theoretical” and being “just practical”. By treating your artefact as knowledge in action and your evaluation as a disciplined way of learning from that action, you can credibly serve both worlds: your organisation’s needs and the academic requirement for a clear, defensible contribution to knowledge.

Want to Explore Opportunities?

If you recognise your own DBA or practice-oriented research in this description – a strong problem, solid experience, but a project that still feels “frozen” – that’s exactly where we can help.

NullPoint Ventures UG (haftungsbeschränkt) works with DBA candidates, practitioner-researchers, and organisations to:

  • sharpen the artefact so it truly operationalises theory and experience,
  • design evaluations that generate real insight into the problem, and
  • turn your project into a repeatable knowledge engine for your organisation.

If you’d like to explore how we could work together – from focused coaching on a single project to broader collaboration on design-science-research-based initiatives – send us a short note about your context and goals:

📩 contact@nullpointventures.com

We’ll get back to you with a concrete proposal for next steps.

Sources

Drechsler, A., & Hevner, A. (2022). Knowledge paths in design science research. Foundations and Trends in Information Systems, 6(3), 171–243. https://doi.org/10.1561/2900000028

Gregor, S., & Hevner, A. R. (2013). Positioning and presenting design science research for maximum impact. MIS Quarterly, 37(2), 337–355. https://doi.org/10.25300/MISQ/2013/37.2.01

Gregor, S., & Jones, D. (2007). The anatomy of a design theory. Journal of the Association for Information Systems, 8(5), 312–335. https://doi.org/10.17705/1jais.00129

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://doi.org/10.2307/25148625

Peffers, K., Tuunanen, T., Rothenberger, M. A., & Chatterjee, S. (2007). A design science research methodology for information systems research. Journal of Management Information Systems, 24(3), 45–77. https://doi.org/10.2753/MIS0742-1222240302

Sein, M. K., Henfridsson, O., Purao, S., Rossi, M., & Lindgren, R. (2011). Action design research. MIS Quarterly, 35(1), 37–56. https://doi.org/10.2307/23043488

vom Brocke, J., Winter, R., Hevner, A., & Maedche, A. (2020). Special issue editorial – Accumulation and evolution of design knowledge in design science research: A journey through time and space. Journal of the Association for Information Systems, 21(3), 520–544. https://doi.org/10.17705/1jais.00611
