The Iris dataset
Scikit-learn comes with a few standard datasets. One of these is the famous Iris dataset, which was first introduced by the statistician Sir R. A. Fisher in 1936.
This dataset is used to address a simple classification problem where we have to predict the species (Setosa, Versicolor or Virginica) of an iris flower, given a set of measurements (sepal length, sepal width, petal length and petal width) in centimetres.
The Iris dataset has 150 instances of Iris flowers, for each of which we have the above four measurements (features) and the species code (response).
The response is in the form of a species code (0, 1 and 2 for Setosa, Versicolor and Virginica respectively). This makes it easy for us to use the dataset in scikit-learn, since, per the requirements above, both the feature and the response data should be numeric.
Let us get the Iris dataset from the "datasets" submodule of the scikit-learn library and save it in an object called "iris" using the following commands:
from sklearn import datasets
iris = datasets.load_iris()
The "iris" object belongs to the class Bunch, i.e. it is a collection of various objects bunched together in a dictionary-like format. These objects include the feature matrix "data" and the target vector "target". We will save these in objects X and y respectively:
# storing feature matrix in "X"
X = iris.data
# storing target vector in "y"
y = iris.target
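As a quick check of the code-to-species mapping mentioned above, the Bunch object also carries a "target_names" array, where each position corresponds to a numeric species code:

```python
from sklearn import datasets

iris = datasets.load_iris()
# the species names are stored in the same order as the codes 0, 1 and 2
print(iris.target_names)  # ['setosa' 'versicolor' 'virginica']
```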
Let us now check the type and shape of these two objects:
# Printing the type of X and y to check that they are NumPy arrays
print("type of X:", type(X), "\n", "type of y:", type(y))
# Printing the shape of X and y to check if their sizes are compatible
print("shape of X:", X.shape, "\n", "shape of y:", y.shape)
type of X: <class 'numpy.ndarray'>
type of y: <class 'numpy.ndarray'>
shape of X: (150, 4)
shape of y: (150,)
We see that X and y are of the type numpy ndarray, where X has 150 instances with four features and y is a one-dimensional array with 150 values.
Great! We see that all three requirements for using X and y in scikit-learn as the feature matrix and response vector are satisfied.
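The three requirements can be collected into a short, self-contained sanity check. This is a sketch: the dtype checks assume the default floating-point features and integer codes that load_iris returns.

```python
import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
X, y = iris.data, iris.target

# 1. both X and y are NumPy arrays
assert isinstance(X, np.ndarray) and isinstance(y, np.ndarray)
# 2. both hold numeric data (float features, integer species codes)
assert X.dtype.kind == 'f' and y.dtype.kind == 'i'
# 3. sizes are compatible: one response value per row of the feature matrix
assert X.shape[0] == y.shape[0] == 150
```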
Stay tuned for the next installment in which the author will discuss splitting the data into training and test sets.
Disclosure: Interactive Brokers
This material is from QuantInsti and is being posted with permission from QuantInsti.