In this article, let us build a simple Machine Learning model for Breast Cancer Detection. This is a beginner-friendly project, so if you are exploring classification algorithms, it will help you understand them better.
For an overview of Machine Learning, see https://copyassignment.com/machine-learning-a-gentle-introduction/
Problem Statement
Over the past decade, machine learning techniques have been widely used in intelligent health systems, particularly for breast cancer diagnosis and prognosis. Breast cancer is one of the most common cancers globally, and early diagnosis can dramatically improve prognosis and chances of survival, since it enables timely clinical treatment. With the help of Machine Learning, we can build a model to classify the type of cancer, making it easier for doctors to provide treatment at the right time. This is a classification problem, and the main goal is to build a model that classifies between the Malignant and Benign types of cancer.
Steps in building our Machine Learning Model
This is a beginner Machine Learning project, so we will try to build our model in an easy and simple way. Let us start our project by examining the steps required to build the Machine Learning model for breast cancer detection.
- Importing Libraries & Loading Dataset
- Exploratory analysis of data
- Data Preprocessing
- Building machine learning models
- Prediction of outcome
Importing the required Libraries
As the first step, let us import the libraries required for the project. If you do not have these libraries, install them using the following commands.
#Installing the required libraries
pip install numpy
pip install pandas
pip install matplotlib
pip install seaborn
If you already have the required libraries, skip the previous step and continue with importing the libraries directly into our project.
#Importing the required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
We can download the dataset required for this project from Kaggle. Kaggle is a place where we can find thousands of datasets to use for our projects. It is a great platform that hosts many machine learning competitions and provides real-world datasets. You can work on this project in the notebook environment provided by Kaggle itself; for that, sign in to Kaggle and create your account.
There are 30 numeric, predictive attributes in the Breast Cancer Detection dataset. Information about some of the attributes is given below:
– radius (mean of distances from the center to points on the perimeter)
– texture (standard deviation of gray-scale values)
– perimeter
– area
– smoothness (local variation in radius lengths)
– compactness (perimeter^2 / area - 1.0; see the quick sketch after this list)
– concavity (severity of concave portions of the contour)
– concave points (number of concave portions of the contour)
– symmetry
– fractal dimension (“coastline approximation” – 1)
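For instance, compactness is a simple function of two of the other attributes. Here is a minimal sketch of that arithmetic using made-up numbers (the dataset itself stores the precomputed values, so this is purely illustrative):

#Illustrating the documented compactness formula with hypothetical numbers;
#the dataset already contains the precomputed values
def compactness(perimeter, area):
    return perimeter ** 2 / area - 1.0

print(compactness(50.0, 250.0))  #prints 9.0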
Loading the data
After importing the libraries, we have to load the data into our project.
If you are using Google Colab, you have to first upload the dataset to access the data. To upload the dataset, run the following commands:
#Load the dataset
from google.colab import files
uploaded = files.upload()
If you are using a Jupyter notebook or the notebook provided by Kaggle, we can use the read_csv method in the pandas library to import the dataset.
#Importing the dataset
df = pd.read_csv("data.csv")
Explore the data
In this step, we will explore our data to understand more about the data. We can check the shape of the dataset, missing values in the data, and other information.
Let us start examining the dataset using the head() method in the pandas library. The head(n) method displays the first n rows of the dataset; with no argument, it displays the first 5 rows by default.
#Displays top 5 rows in the dataset
df.head()
Output: (the first five rows of the dataframe)
Let us now see the number of rows and columns in the dataset. We can find the dimensions of the dataset using the shape attribute of the dataframe.
#Displays dimensions of the dataset
df.shape
Output:
(569, 33)
We can see that there are 569 rows, which means there are 569 patients in this data, and 33 columns, which means there are 33 data points for each patient: an id, the diagnosis label, 30 features, and one empty column (more on that below).
#Displays column names in the dataset
df.columns
Output:
Index(['id', 'diagnosis', 'radius_mean', 'texture_mean', 'perimeter_mean', 'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean', 'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean', 'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se', 'fractal_dimension_se', 'radius_worst', 'texture_worst', 'perimeter_worst', 'area_worst', 'smoothness_worst', 'compactness_worst', 'concavity_worst', 'concave points_worst', 'symmetry_worst', 'fractal_dimension_worst', 'Unnamed: 32'], dtype='object')
#Displays information about column attributes
df.info()
Output:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 569 entries, 0 to 568
Data columns (total 33 columns):
 #   Column                   Non-Null Count  Dtype
---  ------                   --------------  -----
 0   id                       569 non-null    int64
 1   diagnosis                569 non-null    object
 2   radius_mean              569 non-null    float64
 3   texture_mean             569 non-null    float64
 4   perimeter_mean           569 non-null    float64
 5   area_mean                569 non-null    float64
 6   smoothness_mean          569 non-null    float64
 7   compactness_mean         569 non-null    float64
 8   concavity_mean           569 non-null    float64
 9   concave points_mean      569 non-null    float64
 10  symmetry_mean            569 non-null    float64
 11  fractal_dimension_mean   569 non-null    float64
 12  radius_se                569 non-null    float64
 13  texture_se               569 non-null    float64
 14  perimeter_se             569 non-null    float64
 15  area_se                  569 non-null    float64
 16  smoothness_se            569 non-null    float64
 17  compactness_se           569 non-null    float64
 18  concavity_se             569 non-null    float64
 19  concave points_se        569 non-null    float64
 20  symmetry_se              569 non-null    float64
 21  fractal_dimension_se     569 non-null    float64
 22  radius_worst             569 non-null    float64
 23  texture_worst            569 non-null    float64
 24  perimeter_worst          569 non-null    float64
 25  area_worst               569 non-null    float64
 26  smoothness_worst         569 non-null    float64
 27  compactness_worst        569 non-null    float64
 28  concavity_worst          569 non-null    float64
 29  concave points_worst     569 non-null    float64
 30  symmetry_worst           569 non-null    float64
 31  fractal_dimension_worst  569 non-null    float64
 32  Unnamed: 32              0 non-null      float64
dtypes: float64(31), int64(1), object(1)

We can continue exploring the data and get a count of all of the columns that contain empty (NaN, NA) values.
#Count the empty values in each column
df.isna().sum()

None of the columns contain any empty values except the column named 'Unnamed: 32', which contains 569 empty values. So we can drop that column from the original dataset, since it adds no value to the model.
#Drop the empty column, and the id column, which is just an identifier with no predictive value
df = df.drop('Unnamed: 32', axis=1)
df = df.drop('id', axis=1)

Diagnosis is the column that we are going to predict with the help of the other columns. Let us explore the different possible values in that column.
#Prints unique values in Diagnosis column
df['diagnosis'].unique()

Output:
array(['M', 'B'], dtype=object)
Here, M means the Malignant and B means the Benign type of cancer.
#Count of unique values in Diagnosis column
df['diagnosis'].value_counts()

Output:
B 357
M 212
Name: diagnosis, dtype: int64

We can identify that out of 569 people, 357 are labeled as B (Benign) and 212 are labeled as M (Malignant).
#Convert column names to a list
l = list(df.columns)
print(l)

Output:
['diagnosis', 'radius_mean', 'texture_mean', 'perimeter_mean', 'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean', 'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean', 'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se', 'fractal_dimension_se', 'radius_worst', 'texture_worst', 'perimeter_worst', 'area_worst', 'smoothness_worst', 'compactness_worst', 'concavity_worst', 'concave points_worst', 'symmetry_worst', 'fractal_dimension_worst']

We can check statistics about the data, such as the mean, standard deviation, minimum value, and maximum value of each column, using the describe() method.
#summary of all numeric columns
df.describe()

This displays a summary of the columns, including the following information:
count
mean
std
min
25%
50%
75%
max

Visualize the data
The next step is to visualize the information to analyze the data. Data visualization is the graphical representation that contains the information and the data. Visualization of data helps to understand the data better.
The countplot() method in the seaborn library is used to show the counts of observations in each category using bars.
#Showing the total count of malignant and benign patients in a countplot
sns.countplot(x='diagnosis', data=df);

Heatmaps visualize data through variations in coloring. When applied to a tabular format, heatmaps are useful for cross-examining multivariate data, by placing variables in the rows and columns and coloring the cells within the table. To find the correlation between each feature and the target, we visualize the correlation matrix as a heatmap.
A correlation heatmap shows a 2D correlation matrix between two discrete dimensions, using colored cells to represent the values. The values of the first dimension appear as the rows of the table, and the values of the second dimension as the columns.
#Heatmap of Correlation
corr = df.corr(numeric_only=True)  #diagnosis is still a text column at this point
plt.figure(figsize=(8, 8))
sns.heatmap(corr);

Encode the Categorical data
As the next step, we are going to encode the categorical data. Categorical data are variables that contain label values instead of numeric values. We need to convert them into numeric data to build a better predictive model.
df['diagnosis'] = df['diagnosis'].map({'M': 1, 'B': 0})
df['diagnosis'].unique()

Output:
array([1, 0], dtype=int64)
So, we have encoded the categorical data: the Malignant type (M) as 1 and the Benign type (B) as 0.
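As an aside, scikit-learn's LabelEncoder produces the same 0/1 encoding, since it assigns integer codes in alphabetical order (B before M). A minimal sketch:

#Equivalent encoding via scikit-learn's LabelEncoder, which assigns
#codes in alphabetical order, so B -> 0 and M -> 1
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
print(le.fit_transform(['M', 'B', 'B', 'M']))  #[1 0 0 1]
print(le.classes_)                             #['B' 'M']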
Splitting the dataset
The data usually has to be split into training and testing parts. The training set contains data with known outputs to help the model learn. The other set, known as the test set, contains data whose output will be predicted by the model. The split should be approximately an 80:20 or 70:30 ratio: the larger part is for training and the smaller part is for testing. This is important because using the same data for training and testing would not produce reliable results.
The train_test_split method in the scikit-learn library is used for splitting the data.
#Splitting the data into the Training and Testing set
x = df.drop('diagnosis', axis=1)
y = df['diagnosis']
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

Next, we can check the shape of the training data and testing data.
x_train.shape
Output:
(398, 30)
x_test.shape
Output:
(171, 30)
We can see that the training and testing data are correctly split in the ratio of 70% and 30%.
Feature Scaling
Our dataset may contain features that vary highly in magnitude, range, and units. We need to bring all features to the same scale. This can be done by scaling the data, for example to a fixed range such as 0-1, or by standardizing to zero mean and unit variance.
Let us use the StandardScaler method in the scikit-learn library, which standardizes our data to zero mean and unit variance.
#Feature Scaling of data
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
x_train = ss.fit_transform(x_train)
#Apply the scaling parameters learned from the training data to the test data
x_test = ss.transform(x_test)

Model selection
We now have clean data to build our model, but we have to find which machine learning algorithm is best for the data. The output is categorical, so we will use supervised classification algorithms. To build the best model, we train and test the dataset with multiple machine learning algorithms and pick the best one. We are going to fit our model with 4 different classification algorithms, namely Logistic Regression, Decision Tree Classifier, Random Forest Classifier, and Support Vector Machine, and use the algorithm with the highest accuracy for our model. A condensed sketch of this comparison is shown below; the following sections then walk through each model one by one.
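Here is a minimal sketch of the whole comparison in one loop, assuming x_train, x_test, y_train, and y_test are already split and scaled as above (exact scores will vary with the random split):

#Condensed version of the per-model sections below;
#assumes x_train, x_test, y_train, y_test already exist
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

models = {
    "Logistic Regression": LogisticRegression(),
    "Decision Tree Classifier": DecisionTreeClassifier(),
    "Random Forest Classifier": RandomForestClassifier(),
    "Support Vector Machine": SVC(),
}
for name, model in models.items():
    model.fit(x_train, y_train)                           #train on the training split
    acc = accuracy_score(y_test, model.predict(x_test))   #evaluate on the test split
    print(name, ":", round(acc, 4))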
Logistic Regression
Logistic Regression is a Machine Learning classification algorithm that is used to predict the probability of a categorical dependent variable. In logistic regression, the dependent variable is a binary variable that contains data coded as 1 or 0.
#Importing Logistic Regression from Scikit learn library
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
#Loading the training data in the model
lr.fit(x_train, y_train)

Output:
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, l1_ratio=None, max_iter=100, multi_class='auto', n_jobs=None, penalty='l2', random_state=None, solver='lbfgs', tol=0.0001, verbose=0, warm_start=False)
#Predicting output with the test data
y_pred = lr.predict(x_test)
y_pred

Output:
array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0], dtype=int64)

y_test
Output:
26 1
483 0
357 0
130 0
440 0
309 0
..
155 0
12 1
81 0
116 0
Name: diagnosis, Length: 171, dtype: int64

We can use the accuracy_score() function provided by scikit-learn to determine the accuracy of our model with Logistic Regression.
#Accuracy Score of Logistic Regression
from sklearn.metrics import accuracy_score
print("Accuracy Score of Logistic Regression: ")
print(accuracy_score(y_test, y_pred))

Output:
Accuracy Score of Logistic Regression: 0.9824561403508771
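Accuracy alone can hide errors on a specific class, which matters for a medical problem. As an optional aside (not part of the original walkthrough), scikit-learn can also report per-class metrics:

#Optional: per-class metrics for the logistic regression predictions
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(y_test, y_pred))       #rows are actual, columns are predicted
print(classification_report(y_test, y_pred))  #precision, recall, and f1 per class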
Decision Tree Classifier
The Decision Tree Classifier takes as input two arrays: an array X, sparse or dense, of shape (n_samples, n_features) holding the training samples, and an array Y of integer values, of shape (n_samples,), holding the class labels for the training samples.

#Importing Decision Tree Classifier from the scikit-learn library
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier()
#Loading the training data in the model
dtc.fit(x_train, y_train)

Output:
DecisionTreeClassifier(ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=None, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, presort='deprecated', random_state=None, splitter='best')
#Predicting output with the test data
y_pred = dtc.predict(x_test)
y_pred

Output:
array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0], dtype=int64)

We can use the accuracy_score() function provided by scikit-learn to determine the accuracy of our model with the Decision Tree Classifier algorithm.
#Accuracy Score of Decision Tree Classifier
from sklearn.metrics import accuracy_score
print("Accuracy Score of Decision Tree Classifier: ")
print(accuracy_score(y_test, y_pred))

Output:
Accuracy Score of Decision Tree Classifier: 0.9239766081871345
Random Forest Classifier
Random Forest is a classifier that builds a number of decision trees on various subsets of the given dataset and combines their predictions to improve the predictive accuracy on that dataset. Here, we are using the RandomForestClassifier method of the ensemble class to implement the Random Forest Classification algorithm.
#Importing Random Forest Classifier from the scikit-learn library
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier()
#Loading the training data in the model
rfc.fit(x_train, y_train)

Output:
RandomForestClassifier(bootstrap=True, ccp_alpha=0.0, class_weight=None, criterion='gini', max_depth=None, max_features='auto', max_leaf_nodes=None, max_samples=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None, oob_score=False, random_state=None, verbose=0, warm_start=False)
#Predicting output with test data
y_pred = rfc.predict(x_test)
y_pred

Output:
array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0], dtype=int64)

We can use the accuracy_score() function provided by scikit-learn to determine the accuracy of our model with the Random Forest Classifier algorithm.
#Accuracy Score of Random Forest Classifier
from sklearn.metrics import accuracy_score
print("Accuracy Score of Random Forest Classifier: ")
print(accuracy_score(y_test, y_pred))

Output:
Accuracy Score of Random Forest Classifier: 0.9473684210526315
Support Vector Classifier
Now, let us implement our model using the Support Vector Classifier (SVC). Like the other classifiers, SVC takes as input two arrays: an array X of shape (n_samples, n_features) holding the training samples, and an array y of class labels (strings or integers), of shape (n_samples,).

from sklearn import svm
svc = svm.SVC()
#Loading the training data in the model
svc.fit(x_train, y_train)

Output:
SVC(C=1.0, break_ties=False, cache_size=200, class_weight=None, coef0=0.0, decision_function_shape='ovr', degree=3, gamma='scale', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True, tol=0.001, verbose=False)
#Predicting output with test data
y_pred = svc.predict(x_test)
y_pred

Output:
array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0], dtype=int64)

Let us use the accuracy_score() function provided by scikit-learn to determine the accuracy of our model with the Support Vector Classifier algorithm.
#Accuracy Score of Support vector classifier
from sklearn.metrics import accuracy_score
print("Accuracy Score of Support Vector Classifier: ")
print(accuracy_score(y_test, y_pred))

Output:
Accuracy Score of Support Vector Classifier: 0.9824561403508771
From the accuracy scores above, the model that performed best on the test data was the Support Vector Classifier, with an accuracy score of about 98.2% (Logistic Regression matched this score on this particular split). So let's choose the Support Vector Classifier to detect cancer cells in patients. Finally, we can make the classification on the test data and show the model's predictions next to the actual values, indicating whether or not each patient has cancer, as in the sketch below.
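A minimal sketch of that final comparison, reusing svc, x_test, and y_test from the steps above:

#Showing the model's predictions next to the actual diagnoses
#(1 = malignant, 0 = benign); reuses svc, x_test, and y_test from above
predictions = svc.predict(x_test)
comparison = pd.DataFrame({'actual': y_test.values, 'predicted': predictions})
print(comparison.head(10))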
And yay! We have successfully completed our Machine Learning project on Breast Cancer Detection. Hope you have enjoyed doing this project!