How To Solve tensorflow.python.framework.errors_impl.InvalidArgumentError?
import tensorflow as tf
import numpy as np
from sklearn.model_selection import train_test_split
np.random.seed(4213)
data = np.random.randint(low=1,high=29, size=(500, 160, 160,
Solution 1:
Your issue comes from the size of the last layer (to avoid these mistakes it is always desirable to use Python constants for N_IMAGES, WIDTH, HEIGHT, N_CHANNELS and N_CLASSES):
For image classification
You should assign a single label to each image. Try switching labels:
import tensorflow as tf
import numpy as np
from sklearn.model_selection import train_test_split
np.random.seed(4213)
N_IMAGES, WIDTH, HEIGHT, N_CHANNELS = (500, 160, 160, 10)
N_CLASSES = 5
data = np.random.randint(low=1,high=29, size=(N_IMAGES, WIDTH, HEIGHT, N_CHANNELS))
labels = np.random.randint(low=0,high=N_CLASSES, size=(N_IMAGES))
#...
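With labels shaped (N_IMAGES,), the last Dense layer then has to produce N_CLASSES scores per image. A minimal sketch of such a head (reusing the arch backbone from the question; the optimizer and loss are just one reasonable choice, not taken from the original code) could look like:
#...
model = tf.keras.Sequential()
model.add(arch)                                  # pretrained backbone from the question
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(N_CLASSES))      # one logit per class
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
#...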
For semantic segmentation
Make sure your classifier (the last layers of the network) is sized accordingly. In this case you need one class per pixel:
#...
model = tf.keras.Sequential()
model.add(arch)
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(WIDTH * HEIGHT))      # one output per pixel
model.add(tf.keras.layers.Reshape([WIDTH, HEIGHT]))   # back to image shape
#...
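One quick way to catch this kind of InvalidArgumentError before training is to compare the model's output shape with the label shape. The following is a self-contained toy sketch (hypothetical, scaled-down dimensions so the Dense layer stays small; the Conv2D layer merely stands in for arch):
import numpy as np
import tensorflow as tf

WIDTH, HEIGHT, N_CHANNELS = (32, 32, 3)          # toy sizes, not the question's 160x160x10
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, padding="same", activation="relu",
                           input_shape=(WIDTH, HEIGHT, N_CHANNELS)),  # stand-in for arch
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(WIDTH * HEIGHT),
    tf.keras.layers.Reshape([WIDTH, HEIGHT]),
])
labels = np.random.randint(low=0, high=5, size=(10, WIDTH, HEIGHT))
print(model.output_shape)   # (None, 32, 32)
print(labels.shape)         # (10, 32, 32) -- per-sample shapes must match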
The Flatten/Dense/Reshape head above is the simplest you can get. Instead, you can set up multiple deconvolution layers to act as the classifier, or you can even flip the arch architecture over and use it to generate the classification results. Orthogonally, you can perform one_hot encoding on the labels and thus expand them by a factor of N_CLASSES, effectively multiplying the number of neurons in the last layer.
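As a rough sketch of that last variant (assuming the constants defined earlier, the arch backbone from the question, and per-pixel integer labels of shape (N_IMAGES, WIDTH, HEIGHT); the Softmax layer and categorical_crossentropy loss are just one reasonable choice, not part of the original code):
#...
model = tf.keras.Sequential()
model.add(arch)
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(WIDTH * HEIGHT * N_CLASSES))
model.add(tf.keras.layers.Reshape([WIDTH, HEIGHT, N_CLASSES]))
model.add(tf.keras.layers.Softmax())                    # per-pixel class probabilities
one_hot_labels = tf.one_hot(labels, depth=N_CLASSES)    # labels: (N_IMAGES, WIDTH, HEIGHT) integers
model.compile(optimizer="adam", loss="categorical_crossentropy")
#...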