Alghanemi, J., & Al Mubarak, M. (2022). The role of artificial intelligence in knowledge management. In Studies in Computational Intelligence. Future of Organizations and Work after the 4th Industrial Revolution (pp. 359–373). 10.1007/978-3-030-99000-8_20.
Amayuelas, A., Wong, K., Pan, L., Chen, W., & Wang, W. (2023). Knowledge of knowledge: Exploring known-unknowns uncertainty with large language models. arXiv preprint arXiv:2305.13712.
Bian, N., Liu, P., Han, X., Lin, H., Lu, Y., He, B., & Sun, L. (2023). A drop of ink may make a million think: The spread of false information in large language models. arXiv preprint arXiv:2305.04812.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … Amodei, D. (2020). Language models are few-shot learners. Retrieved from http://arxiv.org/abs/2005.14165.
Farquhar, S., Kossen, J., Kuhn, L., & Gal, Y. (2024). Detecting hallucinations in large language models using semantic entropy. Nature, 630(8017), 625–630. 10.1038/s41586-024-07421-0.
Feldman, P., Foulds, J. R., & Pan, S. (2023). Trapping LLM hallucinations using tagged context prompts. Retrieved from http://arxiv.org/abs/2306.06085.
Feuerriegel, S., Hartmann, J., Janiesch, C., & Zschech, P. (2024). Generative AI. Business & Information Systems Engineering, 66(1), 111–126.
Gunjal, A., Yin, J., & Bas, E. (2023). Detecting and preventing hallucinations in large vision language models. Retrieved from http://arxiv.org/abs/2308.06394.
Kanaani, M., Dadkhah, S., & Ghorbani, A. A. (2024, May). Triple-R: Automatic reasoning for fact verification using language models. In N. Calzolari, M.-Y. Kan, V. Hoste, A. Lenci, S. Sakti, & N. Xue (Eds.), Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) (pp. 16831–16840). Retrieved from https://aclanthology.org/2024.lrec-main.1463.
Kang, H., Ni, J., & Yao, H. (2023). EVER: Mitigating hallucination in large language models through real-time verification and rectification. Retrieved from http://arxiv.org/abs/2311.09114.
Khamassi, M., Nahon, M., & Chatila, R. (2024). Strong and weak alignment of large language models with human values. Scientific Reports, 14, 19399. 10.1038/s41598-024-70031-3.
Köpf, A., Kilcher, Y., von Rütte, D., Anagnostidis, S., Tam, Z.-R., Stevens, K., … Mattick, A. (2024). OpenAssistant conversations - democratizing large language model alignment. In Proceedings of the 37th International Conference on Neural Information Processing Systems (Article 2064). New Orleans, LA, USA: Curran Associates Inc.
Li, M., Peng, B., Galley, M., Gao, J., & Zhang, Z. (2023). Self-Checker: Plug-and-play modules for fact-checking with large language models. Retrieved from http://arxiv.org/abs/2305.14623.
Ma, J., Dai, D., Sha, L., & Sui, Z. (2024). Large language models are unconscious of unreasonability in math problems. arXiv preprint arXiv:2403.19346.
Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., … Clark, P. (2023). Self-Refine: Iterative refinement with self-feedback. 10.48550/arXiv.2303.17651.
Mardiansyah, K., & Surya, W. (2024). Comparative analysis of ChatGPT-4 and Google Gemini for spam detection on the SpamAssassin public mail corpus. 10.21203/rs.3.rs-4005702/v1.
Mayahi, S., & Vidrih, M. (2022). The impact of generative AI on the future of visual content marketing. 10.48550/arXiv.2211.12660.
Molina, M. G., & Chicaíza, L. (2011). A guide to sources for research in economic sciences. Social Science Research Network. 10.2139/ssrn.1766062.
Mündler, N., He, J., Jenko, S., & Vechev, M. (2023). Self-contradictory hallucinations of large language models: Evaluation, detection and mitigation. Retrieved from http://arxiv.org/abs/2305.15852.
Peng, B., Galley, M., He, P., Cheng, H., Xie, Y., Hu, Y., … Gao, J. (2023). Check your facts and try again: Improving large language models with external knowledge and automated feedback. Retrieved from http://arxiv.org/abs/2302.12813.
Qiu, Y., Ziser, Y., Korhonen, A., Ponti, E. M., & Cohen, S. B. (2023). Detecting and mitigating hallucinations in multilingual summarisation. Retrieved from http://arxiv.org/abs/2305.13632.
Ramnarayan, S. (2021). Marketing and artificial intelligence. In Advances in Marketing, Customer Relationship Management, and E-Services (pp. 75–95). 10.4018/978-1-7998-5077-9.ch005.
Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121–154.
Saxena, S., Prasad, S., Prakash, M. V., Shankar, A., Vaddina, V., & Gopalakrishnan, S. (2023). Minimizing factual inconsistency and hallucination in large language models. arXiv preprint arXiv:2311.13878.
Schulhoff, S., Ilie, M., Balepur, N., Kahadze, K., Liu, A., Si, C., Li, Y., Gupta, A., Han, H., & Schulhoff, S. (2024). The prompt report: A systematic survey of prompting techniques. Retrieved from https://arxiv.org/abs/2406.06608.
Si, C., Gan, Z., Yang, Z., Wang, S., Wang, J., Boyd-Graber, J., & Wang, L. (2022). Prompting GPT-3 to be reliable. Retrieved from http://arxiv.org/abs/2210.09150.
Goderdzishvili, T. (2023). Artificial intelligence and creative thinking, the future of idea generation. Economics, 105(3–4), 63–73. 10.36962/ecs105/3-4/2023-63.
Tu, C.-H., Hsu, H.-J., & Chen, S.-W. (2024). Reinforcement learning for optimized information retrieval in LLaMA. 10.21203/rs.3.rs-3847100/v1.
Vima, C., Bosch, H., & Harringstone, J. (2024). Enhancing inference efficiency and accuracy in large language models through next-phrase prediction. 10.21203/rs.3.rs-4864441/v1.
White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., … Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with ChatGPT. Retrieved from http://arxiv.org/abs/2302.11382.
Yan, Y., Zheng, P., & Wang, Y. (2024). Enhancing large language model capabilities for rumor detection with knowledge-powered prompting. Engineering Applications of Artificial Intelligence, 133, 108259. 10.1016/j.engappai.2024.108259.
Yin, Z. (2024). A review of methods for alleviating hallucination issues in large language models. Applied and Computational Engineering, 76, 258–266.
Yu, L., & Lai, Z. (2021). The management of internet content products in the era of artificial intelligence. Journal of Physics. Conference Series, 1757(1), 012017. 10.1088/1742-6596/1757/1/012017.
Zhang, Y., Li, S., Liu, J., Yu, P., Fung, Y. R., Li, J., Li, M., & Ji, H. (2024). Knowledge overshadowing causes amalgamated hallucination in large language models. Retrieved from http://arxiv.org/abs/2407.08039.