Our Computer Science forum is the place for Q&A-style discussions related to software design and schema, mathematical theory, and algorithms.
1,072 Recommended Topics
What do you think is the biggest skill gap that you generally see between junior and senior developers today?
For those of you who are self-taught developers: How did you build your logical thinking skills without a computer science degree?
I’m working on a mid-sized TypeScript project and running into a weird issue with type narrowing. Inside a callback function (specifically inside a .map()), TypeScript refuses to narrow a union type even though I’ve added all the checks. Outside the callback it works fine. Is this a known limitation, or …
I'm posting this in the computer science forum because I'm expecting answers to be along the lines of better thought processes, consideration for speed and memory management, etc. Or, maybe you guys will surprise me and you'll come up with some things that never crossed my mind. So, as the …
In my previous article, I presented a [comparison of GPT-4o and Claude 3.5 Sonnet for multi-label text classification](https://www.daniweb.com/programming/computer-science/tutorials/542629/openai-gpt-4o-vs-claude-3-5-sonnet-for-multi-label-text-classification). The accuracies achieved by both models were relatively low. Fine-tuning is one solution to overcome the low performance of large language models. With fine-tuning, you can incorporate custom domain knowledge into an LLM's …
Which under-the-radar area of software development do you think deserves more attention? What about embedded systems? Do people still program in assembly these days?
On April 14, 2025, OpenAI released [GPT-4.1](https://openai.com/index/gpt-4-1/) — a model touted as the new state-of-the-art, outperforming GPT-4o on all major benchmarks. As always, I like to evaluate new LLMs on simple tasks like text classification and summarization to see how they compare with current leading models. In this article, I …
Should we still be teaching C and C++ to new programmers in 2025? If you could change the way programming is taught today, what would you change? This question is geared more toward those recently out of school and just starting their careers, as I'm not sure those of …
Let me explain: as a hobby project, I've recently been trying to vibe-code an AGI prototype. Anyhow, this of course leads to the question: which websites would be of great help to developers, or even be used by an AGI in its recursive learning and improvement (excluding daniweb.com, I mean...)? I …
[LangGraph](https://www.langchain.com/langgraph) is an agentic framework for orchestrating complex language model workflows as graphs of nodes and edges. Subgraphs in LangGraph are simply graphs used as nodes within a larger graph. In other words, an entire graph (with its internal nodes and logic) can be encapsulated and treated as a single …
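For readers unfamiliar with the pattern, here is a minimal sketch (not the article's code) of a compiled graph being attached to a parent graph with `add_node`; the `State` schema and node names are made up for illustration, and the `langgraph` package is assumed to be installed.

```python
# Minimal LangGraph subgraph sketch: a compiled graph used as a node
# in a parent graph. Names and state schema are illustrative placeholders.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


class State(TypedDict):
    text: str


def clean(state: State) -> State:
    # Subgraph node: normalize whitespace.
    return {"text": " ".join(state["text"].split())}


# Build and compile the subgraph.
sub_builder = StateGraph(State)
sub_builder.add_node("clean", clean)
sub_builder.add_edge(START, "clean")
sub_builder.add_edge("clean", END)
subgraph = sub_builder.compile()


def shout(state: State) -> State:
    # Parent-graph node that runs after the subgraph.
    return {"text": state["text"].upper()}


# The compiled subgraph is added to the parent graph as an ordinary node.
parent_builder = StateGraph(State)
parent_builder.add_node("preprocess", subgraph)
parent_builder.add_node("shout", shout)
parent_builder.add_edge(START, "preprocess")
parent_builder.add_edge("preprocess", "shout")
parent_builder.add_edge("shout", END)
graph = parent_builder.compile()

print(graph.invoke({"text": "  hello   subgraphs  "}))  # {'text': 'HELLO SUBGRAPHS'}
```

Because parent and subgraph share the same state keys, the compiled subgraph behaves like any other node when the parent graph is invoked.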
Hello everyone, I'm a 3rd year uni student from Staffordshire, currently on placement and looking for a super cool final-year project. My final year starts this August, and I'm hoping to arm myself better with some research on the project. I still do not have any idea what …
[OpenAI](https://openai.com/) and [Anthropic](https://www.anthropic.com/) are two AI giants delivering state-of-the-art large language models for various tasks. In a [previous article](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification), I compared OpenAI GPT-4o and Anthropic Claude 3.5 Sonnet models for text classification tasks. That article was published almost a year ago. Since then, both OpenAI and Anthropic have released state-of-the-art …
Large language models are trained on a fixed corpus, and their knowledge is often limited by the documents they are trained on. Techniques like retrieval augmented generation, continuous pre-training, and fine-tuning enhance an LLM's default knowledge. However, these techniques still cannot enable an LLM to answer queries that require …
This tutorial demonstrates how to build an AI agent that queries SQLite databases using natural language. You will see how to leverage the [LangGraph framework](https://www.langchain.com/langgraph) and the [OpenAI GPT-4o](https://openai.com/index/gpt-4/) model to retrieve natural language answers from an SQLite database, given a natural language query. So, let's begin without further ado. ## …
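As a rough sketch of the general approach (the tutorial's own code may differ), LangGraph's prebuilt ReAct agent can be combined with LangChain's SQL toolkit; the `chinook.db` path, the question, and the `OPENAI_API_KEY` environment variable are assumptions for illustration.

```python
# Sketch: a natural-language-to-SQLite agent using LangGraph's prebuilt
# ReAct agent with LangChain's SQL toolkit. Assumes OPENAI_API_KEY is set
# and that ./chinook.db exists; both are placeholders.
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI(model="gpt-4o")
db = SQLDatabase.from_uri("sqlite:///chinook.db")

# The toolkit exposes tools for listing tables, inspecting schemas,
# checking queries, and executing SQL.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent = create_react_agent(llm, toolkit.get_tools())

result = agent.invoke(
    {"messages": [("user", "Which three customers spent the most in total?")]}
)
print(result["messages"][-1].content)
```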
In a [previous article](https://www.daniweb.com/programming/computer-science/tutorials/543028/text-classification-and-summarization-with-deepseek-r1-distill-llama-70b), I presented a comparison of [DeepSeek-R1-Distill-Llama-70b](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) with the [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) for text classification and summarization. Both these models are distilled versions of the original DeepSeek R1 model. Recently, I wanted to try the original version of the DeepSeek R1 model using the DeepSeek API. However, I was …
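For reference, DeepSeek's API is OpenAI-compatible, so a minimal R1 call might look like the sketch below; the `DEEPSEEK_API_KEY` variable and the prompt are placeholders, and the `deepseek-reasoner` model name follows DeepSeek's public documentation.

```python
# Sketch: calling DeepSeek-R1 through the OpenAI-compatible DeepSeek API.
# Assumes a DEEPSEEK_API_KEY environment variable; the prompt is a placeholder.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # DeepSeek-R1 reasoning model
    messages=[
        {"role": "user", "content": "Classify the sentiment of: 'The flight was delayed again.'"}
    ],
)
print(response.choices[0].message.content)
```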
In the [last article](https://www.daniweb.com/programming/computer-science/tutorials/542973/benchmarking-deepseek-r1-for-text-classification-and-summarization#post2300447), I explained how you can use the [DeepSeek-R1-Distill-Qwen-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) model for text classification and summarization problems. In this article, we will use the [DeepSeek-R1-Distill-Llama-70b](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) for the same tasks. The following results from [DeepSeek-AI's official paper](https://arxiv.org/pdf/2501.12948) show that `DeepSeek-R1-Distill-Llama-70b` outperforms the other distilled models on 4 out of …
In my previous article, I explained how to fine-tune [OpenAI GPT-4o model for natural language processing tasks](https://www.daniweb.com/programming/computer-science/tutorials/542333/how-to-fine-tune-the-openai-gpt-4o-model-the-wait-is-finally-over). At OpenAI DevDay, held on October 1, 2024, OpenAI announced that users can now fine-tune OpenAI vision and multimodal models such as GPT-4o and GPT-4o mini. The best part is that fine-tuning vision …
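As a hedged illustration of the data side (not the article's exact code), a single vision fine-tuning example can be written to the training JSONL as shown below; the question, image URL, and label are invented placeholders.

```python
# Sketch: what one vision fine-tuning example might look like when written
# to the training JSONL file. The image URL and label are placeholders.
import json

example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What product category does this photo show?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/shoe.jpg"}},
            ],
        },
        {"role": "assistant", "content": "footwear"},
    ]
}

# Append one chat-format training example per line.
with open("vision_train.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```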
DeepSeek-R1 is a groundbreaking family of reinforcement learning (RL)-driven AI models developed by the Chinese AI firm [DeepSeek](https://www.deepseek.com/). It is designed to rival industry leaders like OpenAI and Google in complex decision-making and optimization problems. In this article, we will benchmark the DeepSeek R1 model for text classification and summarization …
Open-source LLMs are gaining significant traction due to their ability to match the performance of advanced proprietary LLMs. These models are free to use and allow users to modify their source code or fine-tune them on their own systems, making them highly versatile for various applications. Alibaba's [Qwen](https://www.alibabacloud.com/en/solutions/generative-ai/qwen?_p_lc=1) and Meta's …
In a previous article, I explained [how to extract tabular data from PDF image documents using Multimodal Google Gemini Pro](https://www.daniweb.com/programming/computer-science/tutorials/541449/pdf-image-table-extractor-web-app-with-google-gemini-pro-and-streamlit#post2296083). However, there are a couple of disadvantages with Google Gemini Pro. First, Google Gemini Pro is not free, and second, it needs complex prompt engineering to retrieve tables, columns, and …
On November 20, 2024, OpenAI updated its GPT-4o model, claiming it is more creative and accurate on several benchmarks. In this article, I compare the GPT-4o November update with the previous version (August update) for text summarization and classification tasks. By the end of this article, you will see whether …
In one of my previous articles, you saw a [comparison of GPT-4o vs. Claude 3.5 sonnet for zero-shot text classification](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification). In that article, we performed multi-class text classification where input tweets belonged to one of three categories. In this article, we will go a step further and perform zero-shot …
On September 25, 2024, Meta released [the Llama 3.2 series of multimodal models](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/). The models are lightweight yet extremely powerful for image-to-text and text-to-text tasks. In this article, you will learn how to use the Llama 3.2 Vision Instruct model for general image analysis, graph analysis, and facial sentiment prediction. …
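A minimal sketch, closely following the Hugging Face model card rather than the article itself: loading the gated 11B vision checkpoint requires accepting Meta's license and a sizeable GPU, and the image URL and prompt are placeholders.

```python
# Sketch: image analysis with Llama 3.2 11B Vision Instruct via transformers.
# Assumes access to the gated checkpoint and a GPU; URL/prompt are placeholders.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open(requests.get("https://example.com/chart.png", stream=True).raw)
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Describe what this image shows."},
    ]}
]

# Build the chat prompt, fuse it with the image, and generate a reply.
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, input_text, add_special_tokens=False, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=60)
print(processor.decode(output[0], skip_special_tokens=True))
```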
This article explains how to create a retrieval augmented generation (RAG) chatbot in LangChain using open-source models from [Hugging Face serverless inference API](https://huggingface.co/docs/api-inference/en/index). You will see how to call large language models (LLMs) and embedding models from Hugging Face serverless inference API using LangChain. You will also see how to …
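Here is a minimal sketch of the idea, assuming the `langchain-huggingface` package, a `HUGGINGFACEHUB_API_TOKEN` environment variable, and placeholder model ids and documents; the article's own pipeline may differ.

```python
# Sketch: a tiny RAG pipeline with LangChain and Hugging Face's serverless
# Inference API. Assumes HUGGINGFACEHUB_API_TOKEN is set; model ids and
# documents are placeholders.
from langchain_huggingface import HuggingFaceEndpoint, HuggingFaceEndpointEmbeddings
from langchain_core.vectorstores import InMemoryVectorStore

llm = HuggingFaceEndpoint(
    repo_id="mistralai/Mistral-7B-Instruct-v0.3",  # any hosted instruct model
    task="text-generation",
    max_new_tokens=256,
)
embeddings = HuggingFaceEndpointEmbeddings(model="sentence-transformers/all-MiniLM-L6-v2")

# Index a few toy documents, then retrieve the ones relevant to the question.
store = InMemoryVectorStore(embeddings)
store.add_texts([
    "DaniWeb hosts programming tutorials and Q&A discussions.",
    "Retrieval augmented generation grounds LLM answers in retrieved documents.",
])
question = "What does retrieval augmented generation do?"
context = "\n".join(d.page_content for d in store.similarity_search(question, k=2))

prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"
print(llm.invoke(prompt))
```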
Hi everyone, I am a first-year student, but I have started thinking about a third-year project. I would like something that is programming-related and less algorithm-heavy. I am currently trying to master C and then plan to learn how to develop GUIs for C programs. And …
Open-source LLMs, owing to their comparable performance with advanced proprietary LLMs, have been gaining immense popularity lately. Open-source LLMs are free to use, and you can easily modify their source code or fine-tune them on your systems. [Alibaba's Qwen](https://www.alibabacloud.com/en/solutions/generative-ai/qwen?_p_lc=1) and [Meta's Llama](https://ai.meta.com/blog/meta-llama-3-1/) series of models are two major players in …
In one of my previous articles, I explained [how to generate stunning images for free using diffusion models](https://www.daniweb.com/programming/computer-science/tutorials/541898/generate-stunning-ai-images-for-free-using-diffusion-models) and showed how to use Stability AI's diffusion models for text-to-image generation. Since then, the AI domain has progressed considerably, particularly in image generation. Black Forest Labs has released [Flux.1 series of …
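A minimal Flux sketch based on the diffusers documentation rather than the article, assuming a GPU and the `black-forest-labs/FLUX.1-schnell` checkpoint; the prompt is a placeholder.

```python
# Sketch: text-to-image generation with FLUX.1-schnell via diffusers.
# Assumes a GPU with enough memory; the prompt and seed are placeholders.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # offload layers to CPU to reduce VRAM use

image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    guidance_scale=0.0,          # schnell is distilled, so no guidance needed
    num_inference_steps=4,
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux_lighthouse.png")
```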
On September 19, 2024, [Alibaba released the Qwen 2.5 series of models](https://qwenlm.github.io/blog/qwen2.5/). The Qwen 2.5-72B base and instruct models outperformed larger state-of-the-art models like Llama 3.1-405B on multiple benchmarks. It is safe to assume that Qwen 2.5-72B is a state-of-the-art open-source large language model. This article will show you how …
## Introduction ## In a previous article, I explained [how to fine-tune the vision transformer model for image classification in PyTorch](https://www.daniweb.com/programming/computer-science/tutorials/540749/fine-tuning-vision-transformer-for-image-classification-in-pytorch). In this article, I will explain how to fine-tune the pre-trained OpenAI Whisper model for audio classification in PyTorch. Audio classification is an important task that can be applied …
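As a hedged sketch of the core idea (not the article's training code), `transformers` ships a `WhisperForAudioClassification` head that can be fine-tuned in PyTorch; the checkpoint, label count, and random audio below are placeholders.

```python
# Sketch: wiring OpenAI Whisper up as an audio classifier in PyTorch.
# The checkpoint, number of labels, and fake audio are placeholders.
import torch
from transformers import AutoFeatureExtractor, WhisperForAudioClassification

checkpoint = "openai/whisper-base"
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = WhisperForAudioClassification.from_pretrained(checkpoint, num_labels=4)

# One fake 5-second 16 kHz clip stands in for a real training example.
waveform = torch.randn(16000 * 5).numpy()
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")
labels = torch.tensor([2])

# Forward pass returns a loss (for backprop) and per-class logits.
outputs = model(input_features=inputs.input_features, labels=labels)
print(outputs.loss, outputs.logits.shape)  # scalar loss, (1, num_labels)
```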
The AI wave has introduced a myriad of exciting applications. While text generation and natural language processing are leading the AI revolution, image- and vision-based technologies are quickly catching up. The intersection of text and vision applications has seen a rapid surge recently. In this article, you'll learn how to …
Large language models (LLMs) are trained to predict the next token (set of characters) following an input sequence of tokens. This makes LLMs suitable for unstructured textual responses. However, we often need to extract structured information from unstructured text. With the Python [LangChain](https://www.langchain.com/) module, you can extract structured information in …
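A minimal sketch of the technique, assuming the `langchain-openai` package and an `OPENAI_API_KEY`; the `Person` schema and example sentence are invented for illustration.

```python
# Sketch: extracting structured data from free text with LangChain's
# with_structured_output helper. Schema and input sentence are placeholders.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI


class Person(BaseModel):
    """A person mentioned in the text."""
    name: str = Field(description="Full name of the person")
    role: str = Field(description="Job title or role, if stated")


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(Person)

result = structured_llm.invoke(
    "Ada Lovelace worked as a mathematician on the Analytical Engine."
)
print(result)  # Person(name='Ada Lovelace', role='mathematician')
```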
Retrieval augmented generation (RAG) allows large language models (LLMs) to answer queries related to the data the models have not seen during training. In my previous article, I explained [how to develop RAG systems using the Claude 3.5 Sonnet model](https://www.daniweb.com/programming/computer-science/tutorials/542136/retrieval-augmented-generation-with-claude-3-5-sonnet). However, RAG systems only answer queries about the data stored …
On August 20, 2024, [OpenAI enabled GPT-4o fine-tuning](https://openai.com/index/gpt-4o-fine-tuning/) in the OpenAI playground and the OpenAI API. The much-awaited feature is free for fine-tuning 1 million daily tokens until September 23, 2024. In this article, I will show you how to fine-tune the OpenAI GPT-4o model for text classification and summarization …
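For context, here is a minimal sketch of starting such a job with the OpenAI Python SDK (the article's own script may differ); the `train.jsonl` file and its contents are placeholders.

```python
# Sketch: kicking off a GPT-4o fine-tuning job with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set and that train.jsonl contains chat-format
# examples, e.g.:
# {"messages": [{"role": "user", "content": "I loved this flight!"},
#               {"role": "assistant", "content": "positive"}]}
from openai import OpenAI

client = OpenAI()

# Upload the training data, then create the fine-tuning job against a
# GPT-4o snapshot that supports fine-tuning.
training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-2024-08-06",
)
print(job.id, job.status)  # poll client.fine_tuning.jobs.retrieve(job.id) until done
```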
In a previous article, I compared [GPT-4o mini vs. GPT-4o and GPT-3.5 Turbo for zero-shot text summarization](https://www.daniweb.com/programming/computer-science/tutorials/542208/gpt-4o-mini-vs-gpt-4o-vs-gpt-3-5-turbo-for-text-summarization). The results showed that GPT-4o mini achieves comparable performance for zero-shot text classification at a much lower price than the other models. I will compare Meta Llama 3.1 70b with OpenAI …
In my previous articles, I presented a [comparison of OpenAI GPT-4o mini model with GPT-4o and GPT-3.5 turbo models for zero-shot text classification](https://www.daniweb.com/programming/computer-science/tutorials/542182/gpt-4o-mini-a-cheaper-and-faster-alternative-to-gpt-4o). The results showed that GPT-4o mini, while significantly cheaper than its counterparts, achieves comparable performance. On 8 August 2024, OpenAI enabled GPT-4o mini fine-tuning for developers across …
In my previous [article on GPT-4o mini](https://www.daniweb.com/programming/computer-science/tutorials/542182/gpt-4o-mini-a-cheaper-and-faster-alternative-to-gpt-4o), I compared the performance of GPT-4o mini against GPT-3.5 Turbo and GPT-4o for zero-shot text classification. We saw that GPT-4o mini, while being 36 times cheaper, achieves only 2% less accuracy than GPT-4o. Furthermore, while being 1/3 of the price, the GPT-4o mini significantly …
On July 18th, 2024, [OpenAI released GPT-4o mini](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/), their most cost-efficient small model. GPT-4o mini is around 60% cheaper than GPT-3.5 Turbo and around 97% cheaper than GPT-4o. As per OpenAI, GPT-4o mini outperforms GPT-3.5 Turbo on almost all benchmarks while being cheaper. In this article, we will compare the …
In my article on [Image Analysis Using OpenAI GPT-4o Model](https://www.daniweb.com/programming/computer-science/tutorials/542030/image-analysis-using-openai-gpt-4o-model), I explained how the GPT-4o model allows you to analyze images and precisely answer questions related to them. In this article, I will show you how to analyze images with the [Anthropic Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) model, which has shown state-of-the-art performance for …
Imagine my surprise when I learned this morning that an [URL="http://news.cnet.com/8301-13924_3-10216733-64.html?part=rss&subj=news&tag=2547-1_3-0-5"]IBM researcher believes[/URL] that Moore's Law -- that the number of transistors on a microprocessor would double nearly every two years -- could be nearing the end of its run. Amazingly, Moore made this prediction in 1965 and his law has …
I want to learn ethical hacking. Is anyone here willing to help me?
Are you interested in finding out what a YouTube channel mostly discusses? Do you want to analyze YouTube videos of a specific channel? If yes, we are in the same boat. YouTube video titles are a great way to determine the channel's primary focus. Plotting a word cloud or a …
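A minimal sketch of the word-cloud step, with invented titles standing in for data fetched from the YouTube Data API; the `wordcloud` and `matplotlib` packages are assumed.

```python
# Sketch: a word cloud from a list of video titles. The titles here are
# placeholders; in practice they would come from the YouTube Data API or
# a channel export.
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS

titles = [
    "Fine-tuning GPT-4o for text classification",
    "Retrieval augmented generation with LangChain",
    "Zero-shot text classification with Claude 3.5 Sonnet",
]

# Join all titles into one text blob and render the cloud.
text = " ".join(titles).lower()
cloud = WordCloud(width=800, height=400, background_color="white",
                  stopwords=STOPWORDS).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```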
In my [previous article](https://www.daniweb.com/programming/computer-science/tutorials/542132/comparing-gpt-4o-vs-claude-3-5-sonnet-for-zero-shot-text-classification), I presented results comparing the Anthropic [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) and [OpenAI GPT-4o](https://openai.com/index/hello-gpt-4o/) models for zero-shot text classification. The results showed that Claude 3.5 Sonnet significantly outperformed GPT-4o. These results motivated me to develop a simple retrieval augmented generation system with [LangChain](https://www.langchain.com/) that enables the Claude 3.5 …
On June 20, 2024, Anthropic released the [Claude 3.5 Sonnet](https://www.anthropic.com/news/claude-3-5-sonnet) large language model. Anthropic claims it is the state-of-the-art model for many natural language processing tasks, surpassing the [OpenAI GPT-4o model](https://openai.com/index/hello-gpt-4o/). My first test for comparing two large language models is their zero-shot text classification ability. In this article, …
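A minimal sketch of what such a zero-shot comparison looks like in code (not the article's exact setup), assuming `ANTHROPIC_API_KEY` and `OPENAI_API_KEY` are set; the tweet and model snapshot names are placeholders.

```python
# Sketch: the same zero-shot sentiment prompt sent to Claude 3.5 Sonnet and
# GPT-4o so their labels can be compared. Tweet and model names are placeholders.
import anthropic
from openai import OpenAI

tweet = "My flight was cancelled twice and nobody helped."
prompt = (
    "Classify the sentiment of the following airline tweet as positive, "
    f"negative, or neutral. Reply with one word only.\n\nTweet: {tweet}"
)

claude = anthropic.Anthropic()
claude_reply = claude.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=10,
    messages=[{"role": "user", "content": prompt}],
)
print("Claude:", claude_reply.content[0].text)

openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print("GPT-4o:", gpt_reply.choices[0].message.content)
```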
As a data scientist, I have extensively used the Hugging Face library for processing unstructured data such as images, text, and audio. My previous blogs have covered various transformer models for these types of data. Lately, however, I discovered that Hugging Face also provides transformer models for tabular data. One …
# Comparison Between Fine-tuned and Default GPT-3.5 Turbo for Text Classification In one of my previous articles, I showed you how to perform [zero-shot text classification using OpenAI GPT-4o and Meta Llama 3 models](https://www.daniweb.com/programming/computer-science/tutorials/542001/openai-gpt-4o-vs-meta-llama-3-for-zero-shot-text-classifiation). I used the default models for predicting sentiments of airline tweets. The default models perform substantially …
OpenAI announced the [GPT-4o (omni)](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700) model on May 13, 2024. The GPT-4o model, as the name suggests, can process multimodal inputs, such as text, image, and speech. As per OpenAI, GPT-4o is the state-of-the-art and best-performing large language model. Among GPT-4o's many capabilities, I found its ability to analyze images …
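A minimal sketch of image analysis with GPT-4o via the chat completions API, assuming an `OPENAI_API_KEY`; the image URL and question are placeholders.

```python
# Sketch: asking GPT-4o a question about an image. The image URL and the
# question are placeholders, not from the article.
from openai import OpenAI

client = OpenAI()
image_url = "https://example.com/chart.png"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this image show? Answer in one sentence."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```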
On April 18, 2024, Meta AI released [Llama 3](https://ai.meta.com/blog/meta-llama-3/), which they claimed to be the most capable openly available LLM to date. Concurrently, OpenAI announced [GPT-4o (omni)](https://community.openai.com/t/announcing-gpt-4o-in-the-api/744700) on May 13, 2024, which is touted as the state-of-the-art proprietary model for various NLP benchmarks. As a guy who loves to compare …
In this tutorial, you will see how to generate stunning AI-generated images from text inputs using state-of-the-art diffusion models from [Hugging Face](https://huggingface.co/). You'll learn about base diffusion models and how combining them with a refiner creates even more detailed, refined results. Diffusion models are powerful because they iteratively refine an …
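A minimal base-plus-refiner sketch following the diffusers SDXL documentation (the tutorial's own code may differ), assuming a CUDA GPU; the prompt is a placeholder.

```python
# Sketch: SDXL base + refiner with Hugging Face diffusers. The base model
# produces a latent image and the refiner polishes the final details.
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a lighthouse on a cliff at sunset, dramatic clouds, highly detailed"

# Hand off at 80% of the denoising schedule, as in the diffusers docs.
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
image.save("lighthouse.png")
```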
## Introduction Text-to-speech (TTS) technology has revolutionized how we interact with devices, making accessing content through auditory means easier. TTS is vital in various applications such as virtual assistants, audiobooks, accessibility tools for the visually impaired, and language learning platforms. This tutorial will explore how to convert text-to-speech using Hugging …
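A minimal sketch using the transformers text-to-speech pipeline, with the Bark checkpoint chosen as one plausible hosted model; the input text and output filename are placeholders.

```python
# Sketch: text-to-speech with the transformers pipeline. Model choice and
# text are placeholders; the pipeline returns a raw waveform plus its rate.
import scipy.io.wavfile
from transformers import pipeline

tts = pipeline("text-to-speech", model="suno/bark-small")
output = tts("DaniWeb tutorials now cover text to speech with Hugging Face.")

audio = output["audio"].squeeze()
scipy.io.wavfile.write("speech.wav", rate=output["sampling_rate"], data=audio)
```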
The advent of large language models (LLMs) has replaced complex scripts with natural language for automating various tasks. You can now use LLMs to interact with your databases using natural language, which makes life easier for people who do not have sufficient SQL knowledge. In this article, you will learn …
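A minimal sketch of one way to do this with LangChain's `create_sql_query_chain` (the article's approach may differ), assuming an `OPENAI_API_KEY` and a local `chinook.db` placeholder database.

```python
# Sketch: generating SQL from a natural language question with LangChain.
# Assumes OPENAI_API_KEY is set and ./chinook.db exists; both are placeholders.
from langchain_community.utilities import SQLDatabase
from langchain_openai import ChatOpenAI
from langchain.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///chinook.db")
llm = ChatOpenAI(model="gpt-4o-mini")

chain = create_sql_query_chain(llm, db)
sql = chain.invoke({"question": "How many customers are there?"})
print(sql)          # the generated SQL statement
print(db.run(sql))  # execute it against the database
```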