U19967 Artificial Intelligence Level 6 Portfolio Assessment 2025/26 | CCCU
ASSESSMENT REQUIREMENTS
| Course Name | Computer Science, Data Computing Intelligence |
| Module Title | Artificial Intelligence |
| Module Code | U19967 |
| Module Start Date / Cohort | September 2025 |
| Module Level | 6 |
| Assessment Type(s) | Portfolio |
| Word Length / Duration | No set word count |
| % Weighting | 50% |
| Deadline (date & time) for Submission | 21/11/2025 at 2pm |
| Format/Location of Submission | Electronic copy via Turnitin, plus video recording via Blackboard |
Assessment Feedback
Feedback to support the work will be given in class.
Formal feedback will be provided within 15 working days of the submission deadline.
Detailed Assessment Guidance
Activity 1:
In predicate logic there are many different rules of inference. Being universally valid, they may be used either to validate complete arguments or to generate new conclusions. Moreover, individual rules of inference may be used on their own or applied in conjunction with others.
Given the following list of basic inference rules:
1. Modus ponendo ponens (MPP): A → B, A |– B
(e.g. IF my program is correct THEN it will run; my program is correct; THEREFORE it will run)
2. Modus tollendo tollens (MTT): A → B, ~B |– ~A
(e.g. IF my program is correct THEN it will run; my program will NOT run; THEREFORE it is not correct)
3. Double negation (DN): A |– ~(~A)
(e.g. My program has run; THEREFORE my program has not not run)
4. &-introduction (&INT): A, B |– (A & B)
(e.g. My program has run; it is correct; THEREFORE my program has run AND is correct)
5. Reductio ad absurdum (RAA): A → B, A → ~B |– ~A
(e.g. IF my program is correct THEN it will run; IF my program is correct THEN it will NOT run; THEREFORE my program is not correct)
6. Universal specialisation (US): ∀X W(X), A |– W(A)
(e.g. All things which are computers are unreliable; a ‘TIPTOP’ is a computer; THEREFORE a ‘TIPTOP’ is unreliable).
A. Given the rule ‘IF artificial intelligence is a growing subject THEN there is no shortage of applicants’ and the fact ‘there is a shortage of applicants’:
- Can you use the list of inference rules above to prove that ‘artificial intelligence is not a growing subject’? Explain your reasoning step by step, name the rules used, and provide a summary of the formal structure of your proof.
- Is there an alternative way to prove the same fact, ‘artificial intelligence is not a growing subject’? Explain your reasoning step by step, name the rules used, and provide a summary of the formal structure of your proof.
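To illustrate the expected summary format (using different propositions, so that the answer to part A is not given away), a proof of R from the premises P → Q, Q → R and P could be set out as:
1. P → Q (premise)
2. Q → R (premise)
3. P (premise)
4. Q (MPP, 1, 3)
5. R (MPP, 2, 4)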
Activity 2:
The following is the rule set of a simple expert system for diagnosing plant issues:
- IF leaves are yellow AND soil is dry THEN watering_status = underwatering
- IF leaves are yellow AND soil is wet THEN watering_status = overwatering
- IF watering_status = underwatering THEN problem = plant is dehydrated
- IF watering_status = overwatering THEN problem = root rot risk
- IF problem = plant is dehydrated AND temperature is high THEN diagnosis = water more frequently
- IF problem = root rot risk AND drainage = poor THEN diagnosis = improve soil drainage
A. Use forward chaining to reason about the plant issue diagnosis if the working memory contains the facts: leaves are yellow, soil is dry, temperature is high. Show your answer in a table naming the rules that match the current working memory (the conflict set), the rule you apply, and how the working memory contents change on the next cycle after a rule has fired.
| Cycle | Working Memory | Conflict Set | Rule fired |
B. Use backward chaining to reason about the plant issue diagnosis if the working memory contains the fact: problem = root rot risk. Show your answer in a similar table.
C. Provide your own example demonstrating backward chaining to reason about the plant issue diagnosis. Show your answer in a similar table.
D. Suppose that the user interface of our expert system allows it to ask the user whether facts are true or false. What question (or questions) should the system ask the user in order to conclude that the diagnosis is ‘improve soil drainage’? What will the user answer? Which rule will require clarification from the user?
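As an aside, the rule set above translates naturally into Prolog, whose built-in resolution strategy is itself a form of backward chaining. The sketch below is one possible encoding (the predicate names are our own choice, not prescribed by the brief) and may help you check your hand-traced answers:

```prolog
% One possible encoding of the plant rule set (predicate names are ours).
watering_status(underwatering)   :- leaves(yellow), soil(dry).
watering_status(overwatering)    :- leaves(yellow), soil(wet).
problem(dehydrated)              :- watering_status(underwatering).
problem(root_rot_risk)           :- watering_status(overwatering).
diagnosis(water_more_frequently) :- problem(dehydrated), temperature(high).
diagnosis(improve_soil_drainage) :- problem(root_rot_risk), drainage(poor).

% Working memory for part A, stored as facts.
leaves(yellow).
soil(dry).
temperature(high).
```

Querying ?- diagnosis(D). makes Prolog chain backwards from the goal down to the stored facts, answering D = water_more_frequently; contrast this with the forward-chaining trace you produce by hand in part A.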
Activity 3:
Design and then implement a mini medical diagnosis expert system in Prolog:
1. Define a set of symptoms and corresponding facts that represent medical conditions. Example:
Fact: “Fever above 38°C”
Fact: “Persistent cough for more than two weeks”
Fact: “Severe headache accompanied by nausea”
Fact: “Shortness of breath”
Fact: “Joint pain and swelling”
2. Create rules based on medical knowledge to infer possible diagnoses from the symptoms provided. For example:
Rule: If the patient has a fever above 38°C and persistent cough for more than two weeks, consider tuberculosis.
Rule: If the patient has shortness of breath and wheezing, consider asthma.
Rule: If the patient has severe headache accompanied by nausea and vomiting, consider migraine or meningitis.
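A minimal starting sketch of such a system is given below, assuming one possible naming scheme (symptom/2 and diagnosis/2 are our own illustrative predicates); you are expected to extend it with your own facts and rules:

```prolog
% Illustrative facts: symptoms observed for a sample patient.
symptom(patient1, fever_above_38).
symptom(patient1, persistent_cough_two_weeks).

% Illustrative rules mapping symptoms to candidate diagnoses.
diagnosis(P, tuberculosis) :-
    symptom(P, fever_above_38),
    symptom(P, persistent_cough_two_weeks).
diagnosis(P, asthma) :-
    symptom(P, shortness_of_breath),
    symptom(P, wheezing).
diagnosis(P, migraine_or_meningitis) :-
    symptom(P, severe_headache),
    symptom(P, nausea),
    symptom(P, vomiting).
```

A query such as ?- diagnosis(patient1, D). then yields D = tuberculosis, and running ?- trace. first lets you watch each fact and rule being evaluated, as required for the video submission.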
Activity 4:
A. Let a Finite State Machine (FSM) A be defined by A = (Q, Σ, q0, δ, F) with:
- Q = {0, 1, 2, 3}
- Σ = {a, b}
- q0 = 0
- F = {3}
And the transition function δ:
| q | t | q' |
| 0 | a | 1 |
| 0 | a | 2 |
| 0 | b | 2 |
| 1 | a | 3 |
| 2 | b | 2 |
| 2 | b | 3 |
- Q = set of states
- Σ = input alphabet
- q0 = initial state
- δ = transition function
- F = set of accepting (final) states.
1. Draw this FSM (hand-drawn).
2. Give the shortest word recognised by the automaton.
3. Give an example of a word not recognised by the FSM.
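As an optional way to check your answers, the transition table translates directly into Prolog (delta/3, final/1 and accepts/2 below are our own illustrative names, not part of the brief):

```prolog
% Transition relation of automaton A, one fact per table row.
delta(0, a, 1).
delta(0, a, 2).
delta(0, b, 2).
delta(1, a, 3).
delta(2, b, 2).
delta(2, b, 3).

final(3).

% accepts(Q, Word): Word is recognised starting from state Q.
accepts(Q, [])     :- final(Q).
accepts(Q, [T|Ts]) :- delta(Q, T, Q1), accepts(Q1, Ts).
```

For example, ?- accepts(0, [b,b,b]). succeeds, while ?- accepts(0, [a,b,a]). fails, confirming that word is not recognised.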
B. Design a finite state machine (FSM) for a non-player character (NPC) that can navigate a maze. Provide the formal definition of your FSM, including:
- The set of states
- Input alphabet
- The initial state
- The transition function
- The set of final (goal) states
Provide the diagram to illustrate your FSM (hand-drawn).
Activity 5:
Choose a local, real-world domain that you are personally familiar with, such as:
- A traditional local market or trade system
- A student club or campus activity
- A local transportation system.
- A family-owned business or digital service in your area.
Task:
1. Identify the domain and describe it in 3–5 sentences.
2. Design a simple ontology for this domain, including at minimum:
- 4–6 classes
- At least 2 subclass relationships
- At least 2 object properties
- At least 2 data properties
- 2–3 individuals (instances)
3. Design the ontology as a diagram (using Protégé).
4. Explain your modeling choices, especially:
- Why you chose these classes and properties
- How the ontology supports reasoning or querying in that domain (provide examples using SWRL, SQWRL, or another ontology query language of your choice; see the illustrative rule after this list)
- Any assumptions or challenges in modeling.
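For instance, if your ontology happened to contain a class Vendor with an object property sells and a data property hasPrice (all hypothetical names chosen purely for illustration), a SWRL rule classifying cheap products might read:

```
Vendor(?v) ^ sells(?v, ?p) ^ hasPrice(?p, ?c) ^ swrlb:lessThan(?c, 10) -> CheapProduct(?p)
```

and a SQWRL query listing what each vendor sells might read:

```
Vendor(?v) ^ sells(?v, ?p) -> sqwrl:select(?v, ?p)
```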
Instructions for submission:
You are expected to submit two elements:
- Portfolio submission: a document in Word or PDF format containing the detailed solution for the activities, including the necessary explanation for each question and a photo of the hand-drawn solution where necessary.
- Video submission: record and upload a short video (less than 15 minutes) to the ‘Video Recording Submission’ bucket on Blackboard, covering the following activities:
- Activity 2: Demonstrate, using cards and paper, how forward chaining and backward chaining work in the plant diagnostic system. How would you explain your answers to a 6-year-old child?
- Activity 3: Demonstrate your medical diagnosis system in SWI-Prolog with example queries. Use the ‘?- trace.’ facility to show how facts and rules are evaluated to reach the results. Explain the reasoning steps clearly.
- Activity 5: Present your ontology implemented in Protégé and discuss your modeling choices as well as the execution of some queries.
Marking Criteria
See the marking criteria tables in the appendix below.
Further Information
- In-class recordings.
- Use the last 30 minutes of the practical sessions to seek feedback from the module leader.
Appendix: Marking Criteria

Activity 1 (20%)
- Excellent (100-80): Demonstrates a comprehensive understanding of predicate logic and inference rules. Correctly formalises all statements, applies the appropriate rule accurately and provides a valid alternative proof. Logical steps are clearly presented, rule names are identified, and reasoning is fully justified. The formal summary and plain-English explanation are precise, coherent, and well-structured.
- Very Good (79-70): Shows strong understanding with accurate use of inference rules and mostly correct formalisation. Minor errors in notation, explanation, or structure, but overall reasoning is valid. Both proofs are largely correct and clearly expressed.
- Good (69-60): Displays a reasonable understanding of inference principles. Main proof is mostly correct but may contain small logical or structural errors. Alternative proof is attempted but incomplete or partially justified. Presentation and explanation are clear but not comprehensive.
- Sound (59-50): Demonstrates basic understanding of inference. Some correct steps identified, but reasoning is partially flawed or lacks clarity. May confuse rules or omit necessary justification. Limited explanation and structure.
- Satisfactory (49-40): Minimal understanding shown. Attempt made to formalise or apply rules, but with major gaps or incorrect logic. Little or no coherent explanation.
- Fail (39-0): No valid formalisation or logical reasoning. Misapplication of rules, missing proofs, or incoherent response. Fails to demonstrate understanding of predicate logic or inference processes.
Activity 2 (20%)
- Excellent (100-80): Demonstrates an in-depth understanding of forward and backward chaining. All reasoning steps are logically sound, clearly presented, and complete. Effectively identifies user questions and links them correctly to rules. Work is highly structured, with no or minimal errors.
- Very Good (79-70): Shows a strong understanding of expert system reasoning. Minor errors in chaining steps or conflict set identification may exist but do not affect the logic. User interaction analysis is relevant and mostly accurate. Work is clear and mostly well-organized.
- Good (69-60): Shows a good understanding of chaining techniques with mostly correct logic. Some conflict sets or rule firings may be missing or misidentified. User interaction question may lack clarity but is on the right track.
- Sound (59-50): Demonstrates basic understanding with several logical or structural issues. Forward/backward chaining may have errors or missing steps. User question is vague or partially incorrect. Work meets basic requirements.
- Satisfactory (49-40): Limited understanding of expert system reasoning. Significant errors or omissions in chaining steps. User interaction component is incorrect or unclear. Work lacks structure or clarity.
- Fail (39-0): Fails to demonstrate understanding of forward/backward chaining or expert system logic. Steps are incorrect, incomplete, or missing. User question is irrelevant or missing. Work does not meet the minimum standard.
Activity 3 (20%)
- Excellent (100-80): Demonstrates an expert-level understanding of expert systems and Prolog. Accurately defines appropriate symptoms and conditions. Rules are medically sound, logically valid, and well-structured in Prolog. System runs without errors and provides correct diagnoses. Code is clean, modular, and well-commented.
- Very Good (79-70): Strong implementation with mostly accurate symptoms and diagnoses. Rules show solid understanding of logic and are implemented correctly. Minor syntax or logic issues may exist but do not affect system functionality significantly. Well-organized code with minimal errors.
- Good (69-60): Defines a reasonable set of symptoms and conditions. Rules generally make sense and mostly work in Prolog. Some issues in logic or structure may lead to minor inaccuracies in diagnosis. Code may have minor errors or lack clarity but shows competence.
- Sound (59-50): Basic but functional Prolog implementation. Symptoms and rules are present but may be overly simplistic, incomplete, or not fully aligned with medical logic. System runs but with noticeable issues. Code may be disorganized or partially incorrect.
- Satisfactory (49-40): Minimal implementation. Symptoms and rules are vague or not well-aligned with realistic diagnoses. Several syntax or logical errors. System may not run correctly or provide valid results. Code lacks structure and clarity.
- Fail (39-0): Fails to implement a working expert system. Major flaws in Prolog syntax, logic, or understanding of expert systems. System does not compile or produces incorrect or incoherent diagnoses. Symptoms and rules are missing or irrelevant.
Activity 4 (20%)
- Excellent (100-80): Demonstrates a comprehensive understanding of FSMs. In Part A, all answers are correct: the diagram is accurate, and the shortest and rejected words are correctly identified. In Part B, the FSM design is clear, logically sound, fully defined (states, transitions, alphabet, etc.), and includes a well-labeled diagram. Work shows originality and completeness.
- Very Good (79-70): Shows strong understanding. Part A is mostly correct with only minor issues (e.g., a minor diagram error or an alternate valid word). Part B FSM is mostly correct with complete definitions and a clear diagram, though it may lack some clarity or optimal design.
- Good (69-60): Understands FSM concepts. Minor errors in Part A (e.g., a word recognition mistake or a missing transition in the diagram). Part B shows a valid FSM with appropriate structure, but might have gaps in definitions, transitions, or clarity in the diagram.
- Sound (59-50): Demonstrates basic understanding. Multiple errors in Part A, such as an incorrect diagram or word analysis. Part B FSM is simplistic or incomplete, with missing states, transitions, or an unclear diagram. Logic is somewhat coherent but underdeveloped.
- Satisfactory (49-40): Limited understanding of FSMs. Part A has several inaccuracies or missing components. Part B FSM is vague, poorly defined, or lacks a usable diagram. Transitions or logic may be incorrect or arbitrary.
- Fail (39-0): Work shows little or no understanding of FSM concepts. Part A is mostly or entirely incorrect. Part B FSM is missing, nonsensical, or completely incorrect. Diagram is missing or irrelevant.
Activity 5 (20%)
- Excellent (100-80): Demonstrates deep understanding of ontology design and semantic modeling. The domain is clearly explained and locally grounded. Ontology includes well-chosen classes, subclasses, object and data properties, and meaningful individuals. Diagram is precise and complete. Modeling choices are clearly justified, with strong use of SWRL/SQWRL for reasoning. Assumptions and challenges are well-articulated.
- Very Good (79-70): Strong work with a clear domain description. Ontology is correctly structured with appropriate elements, and the diagram is mostly accurate. Reasoning examples are relevant, though may lack depth or contain minor errors. Good justification for design choices.