Send downlink command as JSON via gRPC and Python using the payload formatter
Hey, I'm trying to send a downlink command to an existing device via gRPC and Python. It works up to a point, but when I send a JSON payload that should then be processed by the payload formatter, the problem is this: if I send the JSON as a UTF-8 encoded string, it does not get passed through the payload formatter but gets enqueued directly to the device. I tried to assign req.queue_item.object, but it gives me an error ("AttributeError: Assignment not allowed to message, map, or repeated field "object" in protocol message object."). Is there a way to send a downlink to ChirpStack with a JSON so that the payload formatter can encode it for the device?
Are you testing this against ChirpStack v3 or v4?
Hey, we use ChirpStack v4 and the corresponding PyPI library.
@brocaar , is this a bug, or is this functionality not yet implemented?
Where in v3 you had to pass the JSON object as a string, in v4 you must pass it as a JSON object ( https://github.com/chirpstack/chirpstack/blob/master/api/proto/api/device.proto#L483 ).
@brocaar I know, but if I set it to either a Python dict or a google.protobuf.Struct it returns:
And what do I do with the data field? I can't leave it empty.
I’m not familiar with the Python API for Protobuf struct types, but maybe this could help you? dictionary - How do you create a Protobuf Struct from a Python Dict? - Stack Overflow
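For example, a minimal sketch of building a Struct from a dict (the field names here are just examples):

```python
from google.protobuf.struct_pb2 import Struct

payload = {"temperature": 21.5, "relay_on": True}

obj = Struct()
obj.update(payload)  # populate the well-known Struct type from a plain dict
```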
@brocaar I tried that before, but I get the same message:
I think there is something else wrong. Maybe the word object is conflicting with something in Python? Or something went wrong with the code generation for Python. But I honestly don't know how to debug this.
Found the solution. If you use:
Instead of:
Then everything works.
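In sketch form (the server address, API token, DevEUI, fPort and payload keys below are placeholders, and the chirpstack-api package is assumed), the working enqueue looks roughly like this:

```python
import grpc
from chirpstack_api import api

channel = grpc.insecure_channel("localhost:8080")
client = api.DeviceServiceStub(channel)
auth_token = [("authorization", "Bearer %s" % "YOUR_API_TOKEN")]

req = api.EnqueueDeviceQueueItemRequest()
req.queue_item.dev_eui = "0102030405060708"
req.queue_item.f_port = 10
req.queue_item.confirmed = False

# Direct assignment (req.queue_item.object = {...}) raises the AttributeError above;
# fill the existing Struct field in place instead.
req.queue_item.object.update({"relay_on": True, "level": 42})
# `data` stays empty here; with `object` set, the device's payload codec is
# expected to produce the raw bytes.

resp = client.Enqueue(req, metadata=auth_token)
print(resp.id)
```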
Can I add a new node in the middle of a Tensorflow model graph using the graphsurgeon tool?
Hello, I’m using TensorRT version 4.0.1.6.
I successfully replaced an unsupported Tensorflow operation with a TensorRT PlugIn layer using the graphsurgeon APIs.
Now, I want to add a totally new node in the middle of the model.
I saw that graphsurgeon supports these APIs: create_node, create_plugin_node, extend, append.
But I couldn't figure out whether I can use them for my purpose.
I know how to find the specific location in the model where I want to add my new node, but I cannot find the right way to use the graphsurgeon APIs to do that.
Please advise,
You can convert the graph_def to graphsurgeon's DynamicGraph, modify it, and finally write it out as UFF:
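Roughly like this (the output node name and file names are placeholders, assuming the graphsurgeon and uff packages shipped with TensorRT):

```python
import graphsurgeon as gs
import uff

# Load the frozen TensorFlow graph into a mutable graphsurgeon graph.
dynamic_graph = gs.DynamicGraph("frozen_graph.pb")

# ... modify dynamic_graph here (replace / remove / append nodes) ...

# Serialize the modified graph as UFF for TensorRT.
uff.from_tensorflow(dynamic_graph.as_graph_def(),
                    output_nodes=["output_node"],
                    output_filename="model.uff")
```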
Thanks juko.lam
My purpose is to add a debug node multiple times in the middle of the graph in order to implement a simple functionality: saving the tensor input data to disk.
That way I will be able to check the correctness of the tensor data produced by the graph nodes.
So, as I wrote previously, I have the following abilities:
- UFF file generation using the uff.from_tensorflow_frozen_model API
- Replacing existing nodes in the graph using the graphsurgeon API
- Removing existing nodes in the graph using the graphsurgeon API
What I haven't managed to do so far is add a totally new node somewhere in the graph.
Can you please provide an example of how to add a new node in the middle of the graph using the graphsurgeon APIs?
Given my constraints (I only have the frozen graph .pb file), do you have any other idea (not related to graphsurgeon) for achieving my purpose?
You can load the frozen .pb file, which gives you the graph, then convert it to a DynamicGraph and modify it. Something like this (probably not directly runnable):
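For example (the op and node names are placeholders; since append() may copy the NodeDef into the underlying graph_def, the new node's inputs are set before appending):

```python
import tensorflow as tf
import graphsurgeon as gs

graph = gs.DynamicGraph("frozen_graph.pb")

# The existing node whose input we want to intercept.
target = graph.find_nodes_by_op("Relu")[0]

# Build the new node and take over the target's inputs.
debug_node = tf.NodeDef()
debug_node.op = "MyDebugDump"            # placeholder op name
debug_node.name = "my_debug_dump"
debug_node.input.extend(target.input)    # repeated fields: extend, don't assign
graph.append(debug_node)

# The existing node now reads from the new node.
del target.input[:]
target.input.append(debug_node.name)
```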
I suggest that you inspect the graphs created by the TRT examples just before they are written to a UFF file.
I tried this to replace the ResizeBilinear with a Conv2DTranspose layer but I got an error:
"convert_lanenet_uff.py", line 32, in add_plugin: node.input = next_.input
AttributeError: Assignment not allowed to repeated field "input" in protocol message object.
Code snippet:

```python
import tensorflow as tf
from tensorflow.core.framework import types_pb2
import graphsurgeon as gs

graph = gs.DynamicGraph('frozen_graph.pb')

# Add new node
node = tf.NodeDef()
node.op = 'Conv2DTranspose'
node.name = 'Conv2DTranspose'
graph.append(node)

# Want to replace ResizeBilinear with the Conv2DTranspose layer
next = graph.find_nodes_by_op("ResizeBilinear")[0]  # assume only one node is found
node.input = next.input
next.input = [node.name]
```
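The thread doesn't resolve this error, but the immediate cause is that protobuf repeated fields cannot be assigned with =. A hedged fix for those last two lines is to copy the contents instead:

```python
node.input.extend(next.input)   # instead of: node.input = next.input
del next.input[:]               # clear the old inputs...
next.input.append(node.name)    # ...instead of: next.input = [node.name]
```

It may also be safer to fill in the new node's inputs before calling graph.append(node), since append may copy the NodeDef into the underlying graph_def.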
Assignment not allowed to repeated field "conversions" in protocol message object #559
panukuulu commented Jan 11, 2022
Trying to implement a code sample using googleads-30.0.0. There seems to be an issue with this line:
It returns the error: AttributeError: Assignment not allowed to repeated field "conversions" in protocol message object
My click_conversions looks like:
I suppose one is not allowed to assign to repeated fields, but I don't see how to proceed. I think appending with request.conversions.append(click_conversion) creates a malformed request. Anyone got a clue on how to get this to work? Thank you for your help.
sakkammadam commented Jan 11, 2022 (edited)
You have to use append. This is a snippet of my code - hope this helps.
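Roughly (the customer ID, resource names and conversion fields below are placeholders, assuming the google-ads Python client):

```python
from google.ads.googleads.client import GoogleAdsClient

client = GoogleAdsClient.load_from_storage("google-ads.yaml")
service = client.get_service("ConversionUploadService")

click_conversion = client.get_type("ClickConversion")
click_conversion.conversion_action = "customers/1234567890/conversionActions/987654"
click_conversion.gclid = "EXAMPLE_GCLID"
click_conversion.conversion_date_time = "2022-01-11 12:32:45-08:00"
click_conversion.conversion_value = 23.41
click_conversion.currency_code = "EUR"

request = client.get_type("UploadClickConversionsRequest")
request.customer_id = "1234567890"
request.conversions.append(click_conversion)   # append instead of assigning a list
request.partial_failure = True

response = service.upload_click_conversions(request=request)
print(response)
```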
panukuulu commented Jan 12, 2022
Thanks, that works. Brilliant.
BenRKarl commented Jan 12, 2022
Thanks.
TypeError: 'Tensor' object does not support item assignment in TensorFlow
I'm trying to run this code:
But I get an error on the last line: TypeError: 'Tensor' object does not support item assignment. It seems I cannot assign to a tensor; how can I fix it?
5 Answers
In general, a TensorFlow tensor object is not assignable, so you cannot use it on the left-hand side of an assignment.
The easiest way to do what you're trying to do is to build a Python list of tensors, and tf.stack() them together at the end of the loop:
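For example (a dummy per-step computation stands in for rnn.rnn() here):

```python
import tensorflow as tf

x = tf.range(4, dtype=tf.float32)

# Collect per-step results in a plain Python list instead of writing into
# a tensor (item assignment on a tf.Tensor is not supported)...
outputs = []
for step in range(3):
    outputs.append(x * float(step))

# ...then combine them into a single tensor at the end of the loop.
result = tf.stack(outputs, axis=0)   # shape (3, 4); tf.pack() in pre-1.0 TF
print(result.shape)
```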
* With the exception of tf.Variable objects, using the Variable.assign() etc. methods. However, rnn.rnn() likely returns a tf.Tensor object that does not support this method.
- Note that tf.pack() has been replaced by tf.stack() since TensorFlow 1.0. – CNugteren Commented Jun 15, 2017 at 14:20
- I have the same problem with the issue in stackoverflow.com/questions/55871023/… , but I do not know how can I do this with tf.stack. could please guide about this issue? – david Commented Apr 26, 2019 at 16:03
- How would one do this?: for x in range(int(r.shape[2]/2)): for d in range(1,int(r.shape[0]/2)): r[d, :, x, :] = r[0, :, x+d, :] – zendevil Commented Aug 20, 2020 at 0:07
Another way you can do it is like this.
then the output is:
ref: https://www.tensorflow.org/api_docs/python/tf/Variable#assign
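A small sketch of the idea (TF 2.x eager; the shape and values are illustrative):

```python
import tensorflow as tf

v = tf.Variable(tf.zeros([2, 3]))

# Unlike a plain tf.Tensor, a tf.Variable supports sliced assignment via .assign().
v[0, 1].assign(5.0)
v[1, :].assign(tf.ones([3]))

print(v.numpy())
# [[0. 5. 0.]
#  [1. 1. 1.]]
```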
- I seem to be getting an error when trying to implement it this way: x = tf.Variable(tf.ones([1,2,2,3], tf.float32)) x = x[:,:,:,1].assign(2.0) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) x_data = sess.run(x) – Moondra Commented Sep 26, 2017 at 20:17
- note: tf.Variable maintains its value across session runs, this is not always the right solution. – Viktor Tóth Commented Aug 15, 2018 at 14:09
- This answer has the worst performance in all speed tests I've done to assign 1D vectors to 2D array. Don't use this method! The fastest so far is mrry's answer with appending to a list and stacking. – Krzysztof Maliszewski Commented Jan 12, 2021 at 23:34
- doesn't work in TF2: EagerTensor object has no attribute 'assign' – Dan D. Commented Oct 6, 2021 at 9:04
When you have a tensor already, convert the tensor to a list using tf.unstack (TF2.0) and then use tf.stack like @mrry has mentioned. (when using a multi-dimensional tensor, be aware of the axis argument in unstack)
- this worked for me. thanks. – rsonx Commented Sep 20, 2023 at 21:39
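A short sketch of that round trip (axis and shapes are illustrative):

```python
import tensorflow as tf

t = tf.zeros([4, 3])

rows = tf.unstack(t, axis=0)   # Python list of 4 tensors, each of shape (3,)
rows[2] = tf.ones([3])         # ordinary list item assignment
t = tf.stack(rows, axis=0)     # back to a (4, 3) tensor
```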
Neither tf.Tensor nor tf.Variable is element-wise assignable. There is a trick, however, which is of course not the most efficient way, especially when you do it iteratively.
You can create a mask and a new_layer tensor holding the new values, and then take a Hadamard product (element-wise product).
The original * mask part sets the positions you want to change to 0 while keeping the rest of original, and the second part, new_layer * (1 - mask), supplies the values from new_layer at exactly those positions without touching the elements the mask kept in the previous step.
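For example (the values and shapes are made up):

```python
import tensorflow as tf

original = tf.ones([2, 3])

# mask: 1 keeps the original value, 0 marks a position to overwrite.
mask = tf.constant([[1., 1., 0.],
                    [1., 0., 1.]])
new_layer = tf.fill([2, 3], 7.0)   # tensor carrying the replacement values

updated = original * mask + new_layer * (1. - mask)
print(updated.numpy())
# [[1. 1. 7.]
#  [1. 7. 1.]]
```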
Another way is to use numpy instead:
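Presumably something along these lines (TF 2.x eager; the index is illustrative):

```python
import tensorflow as tf

t = tf.zeros([2, 3])

arr = t.numpy()                 # copy the tensor out to a mutable numpy array
arr[0, 2] = 7.0                 # plain item assignment
t = tf.convert_to_tensor(arr)   # back to a tensor
```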
Use Pytorch:
As this comment says, a workaround is to create a NEW tensor, combining the previous one with the new values in the zones that need to change:
- Create a mask with the shape of outputs, with 0s at the indices you want to replace and 1s elsewhere (it can also work with True and False).
- Create a new matrix with the shape of outputs holding the new desired values: new_values.
- Replace only the needed indices with: outputs_new = outputs * mask + new_values * (1 - mask)
If you would provide me with an MWE I could do the code for you.
A good reference is this note: How to Replace Values by Index in a Tensor with TensorFlow-2.0
- How would one do this? for x in range(int(r.shape[2]/2)): for d in range(1,int(r.shape[0]/2)): r[d, :, x, :] = r[0, :, x+d, :] – zendevil Commented Aug 20, 2020 at 0:07