References
The following bibliography lists all scholarly and technical sources used in this project, formatted according to the APA 7th edition standard.
- Bengio, Y., Hinton, G., Yao, A., Song, D., Abbeel, P., Harari, Y. N., … Russell, S. (2024). Managing extreme AI risks amid rapid progress. Science, 384, 842–845. https://arxiv.org/abs/2310.17688
- Center for AI Safety. (2023). Statement on AI risk. https://www.safe.ai/statement-on-ai-risk
- Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T., & Fritz, M. (2023). Not what you've signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection. In Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security (AISec @ CCS 2023) (pp. 39–53). Association for Computing Machinery.
- Meta AI. (2024). Llama 3.1 model card and technical documentation. https://llama.meta.com/docs/model-cards-and-prompt-formats/llama3_1/
- Mitchell, M. (2019). Artificial intelligence: A guide for thinking humans. Farrar, Straus and Giroux.
- Ollama. (2024). Library of open-source LLMs for local inference. https://ollama.com/library
- OWASP Foundation. (2025). LLM01: Prompt injection. https://genai.owasp.org/llmrisk/llm01-prompt-injection/
- OWASP Foundation. (2025). LLM02: Sensitive information disclosure. https://genai.owasp.org/llmrisk/llm022025-sensitive-information-disclosure/
- OWASP Foundation. (2025). LLM03: Supply chain. https://genai.owasp.org/llmrisk/llm032025-supply-chain/
- OWASP Foundation. (2025). LLM04: Data and model poisoning. https://genai.owasp.org/llmrisk/llm042025-data-and-model-poisoning/
- OWASP Foundation. (2025). LLM05: Improper output handling. https://genai.owasp.org/llmrisk/llm052025-improper-output-handling/
- OWASP Foundation. (2025). LLM06: Excessive agency. https://genai.owasp.org/llmrisk/llm062025-excessive-agency/
- OWASP Foundation. (2025). LLM07: System prompt leakage. https://genai.owasp.org/llmrisk/llm072025-system-prompt-leakage/
- OWASP Foundation. (2025). LLM08: Vector and embedding weaknesses. https://genai.owasp.org/llmrisk/llm082025-vector-and-embedding-weaknesses/
- OWASP Foundation. (2025). LLM09: Misinformation. https://genai.owasp.org/llmrisk/llm092025-misinformation/
- OWASP Foundation. (2025). LLM10: Unbounded consumption. https://genai.owasp.org/llmrisk/llm102025-unbounded-consumption/
- OWASP Foundation. (2025). OWASP Top 10 for LLM applications 2025. https://genai.owasp.org/llm-top-10/
- Zou, A., Wang, Z., Kolter, J. Z., & Fredrikson, M. (2023). Universal and transferable adversarial attacks on aligned language models. arXiv. https://arxiv.org/abs/2307.15043
- Zou, W., Geng, R., Wang, B., & Jia, J. (2024). PoisonedRAG: Knowledge corruption attacks to retrieval-augmented generation of large language models. arXiv. https://arxiv.org/abs/2402.07867