Part 2 of our workshop outcomes covers open research questions (part 1 looked at the pros and cons of the just-in-time approach).

Open Research Questions

Based on the papers presented and the ongoing discussions, the participants derived the following list of open questions:
  1. JIT RE assumes that what’s important won’t be missed, i.e., that what’s missed is not important. Does this relationship hold in practice?
  2. How do we allocate resources (e.g., budget) in the face of JIT requirements?
  3. How small is a “small” initial investment, and at what point does it become “big”? (Principle #2)
  4. What is the tradeoff between adaptability and building something you don’t really need (YAGNI: “you aren’t gonna need it”)?
  5. How can you tell something is important without refining it? (e.g., slicing architecturally significant requirements; bucketing architectural work into 2–3 week periods)
  6. We don’t know the unknowns (e.g., quality attributes/NFRs). Would a requirements taxonomy be useful here?
  7. Can you do JIT RE on NFRs (e.g., security)?
  8. Is traceability different in JIT RE? Traceability assumes one has complete artifacts on both sides, requirements and code/design/architecture, to trace between.
  9. How consistently are labels like “enhancement”, “major improvement”, “new feature”, and “user story” used? (There is cultural variance, e.g., agile versus OSS projects.) Can we rely on these labels as research data?
  10. Metrics are rich for coding and testing, but not for requirements/RE. How significant is the lack of RE metrics?
  11. Can spikes serve as a way of advancing (experimenting) and deriving or eliciting new requirements?
    1. e.g., “fail early, fail often” via branching/forking
  12. What are the relationships, at an abstract level, between the concepts of agile, time-constrained, and just-in-time RE?
    1. e.g., agile is not necessarily time-constrained