python - Train a model using a queue in TensorFlow
I designed a neural network in TensorFlow for a regression problem by adapting the TensorFlow tutorial. However, due to the structure of the problem (~300,000 data points and the use of the costly FtrlOptimizer), the training takes too long to execute on my 32-CPU machine (I don't have GPUs).
According to this comment, and a quick confirmation via htop, it appears that some operations are single-threaded and that the bottleneck comes from feeding the data with feed_dict.
Therefore, as advised here, I tried to use queues to multi-thread my program.
I wrote the following simple piece of code to feed a queue and train a model on it:
    import numpy as np
    import tensorflow as tf
    import threading

    # Function for enqueueing the data in parallel
    def enqueue_thread():
        sess.run(enqueue_op, feed_dict={x_batch_enqueue: x, y_batch_enqueue: y})

    # Set the number of couples (x, y) used to "train" the model
    batch_size = 5

    # Generate the data: y = x + 1 + little_noise
    x = np.random.randn(10, 1).astype('float32')
    y = x + 1 + np.random.randn(10, 1) / 100

    # Create the variables of the model y = x*w + b; w and b should both converge to 1
    w = tf.get_variable('w', shape=[1, 1], dtype='float32')
    b = tf.get_variable('b', shape=[1, 1], dtype='float32')

    # Prepare the placeholders for enqueueing
    x_batch_enqueue = tf.placeholder(tf.float32, shape=[None, 1])
    y_batch_enqueue = tf.placeholder(tf.float32, shape=[None, 1])

    # Create the queue
    q = tf.RandomShuffleQueue(capacity=2**20, min_after_dequeue=batch_size,
                              dtypes=[tf.float32, tf.float32], seed=12,
                              shapes=[[1], [1]])

    # Enqueue operation
    enqueue_op = q.enqueue_many([x_batch_enqueue, y_batch_enqueue])

    # Dequeue operation
    x_batch, y_batch = q.dequeue_many(batch_size)

    # Prediction of the linear model + bias
    y_pred = tf.add(tf.mul(x_batch, w), b)

    # MAE cost function
    cost = tf.reduce_mean(tf.abs(y_batch - y_pred))

    learning_rate = 1e-3
    train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

    init = tf.initialize_all_variables()
    sess = tf.Session()
    sess.run(init)

    available_threads = 1024

    # Feed the queue
    for i in range(available_threads):
        threading.Thread(target=enqueue_thread).start()

    # Train the model
    for step in range(1000):
        _, cost_step = sess.run([train_op, cost])
        print(cost_step)

    wf = sess.run(w)
    bf = sess.run(b)
This code doesn't seem to work, because each time I evaluate x_batch, a y_batch is also dequeued, and vice versa. So I am not comparing the features with their corresponding "result".
Is there an easy way to avoid this problem?
My mistake, everything actually worked fine. I was misled because, at each step of the algorithm, I was estimating the performance on a different batch, and also because my model was too complicated for a dummy one (I should have used y = w*x or y = x + b). Then, when I tried to print the results in the console, I executed sess.run several times on different variables and got inconsistent results.
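To make that last point concrete, here is a minimal check (not part of the original post) that reuses the session and ops defined in the question. When x_batch and y_batch are fetched in a single sess.run call they come from the same dequeued batch, so the pairs line up and y ≈ x + 1; fetching them in two separate calls triggers two separate dequeues, which is exactly what made the printed results look inconsistent.

    # Single sess.run: both tensors come from the same dequeued batch,
    # so the (x, y) pairs stay aligned.
    x_vals, y_vals = sess.run([x_batch, y_batch])
    print(np.allclose(y_vals, x_vals + 1, atol=0.1))  # expected: True

    # Two separate sess.run calls: each call performs its own dequeue,
    # so x_other and y_other come from *different* batches and cannot
    # be compared element-wise.
    x_other = sess.run(x_batch)
    y_other = sess.run(y_batch)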
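A side note that goes beyond the original post: the code in the question starts 1024 threads that each enqueue the whole dataset once. For longer-running jobs, one common pattern is to let a few threads loop and to shut them down cleanly with a tf.train.Coordinator once training is done. The following is only an illustrative sketch, assuming sess, q, enqueue_op, the placeholders and the training ops from the question are already defined; the thread count and loop lengths are arbitrary.

    coord = tf.train.Coordinator()

    def enqueue_loop():
        # Keep refilling the queue until the coordinator asks to stop.
        while not coord.should_stop():
            try:
                sess.run(enqueue_op,
                         feed_dict={x_batch_enqueue: x, y_batch_enqueue: y})
            except tf.errors.CancelledError:
                return  # queue was closed, exit the thread

    threads = [threading.Thread(target=enqueue_loop) for _ in range(4)]
    for t in threads:
        t.start()

    for step in range(1000):
        _, cost_step = sess.run([train_op, cost])

    # Stop the feeding threads and cancel any enqueues blocked on a full queue.
    coord.request_stop()
    sess.run(q.close(cancel_pending_enqueues=True))
    coord.join(threads)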