[TFLite] Add TABLE operator for LUT-based operators lowering #45342
Conversation
Thanks for your contribution. The new TABLE operator will not be used as part of the TF-to-TFLite op lowering, so it is hard to apply it to actual use cases. Are there any future plans for that? If not, how about having this table operator as a custom op?
(force-pushed from 184a9dc to a24d523)
Since this operator is somewhat in an experimental stage, how about adding it as a custom op as a first step, until this custom op resolves the unsupported cases or unlocks new opportunities with concrete examples? A builtin op addition would impose an extra binary-size requirement on most Android developers.
Sorry for my late answer. I think it would be easier for now to leave the PR on the side, as there are still some discussions in progress regarding the advantages of a separate TABLE operator compared to generating the LUT inside the …
@Tessil Any update on this PR? Please. Thanks!
Hi @gbaned, the PR is suspended for now, as some discussions are still required internally and with some of the TFLite team members to check whether we move forward with a separate TABLE operator or not. Sorry for that.
It has been 14 days with no activity and the …
Thanks Tessil for making the change.
If I understand correctly, the change in softmax is only a refactoring (so the common functions can support the new TABLE operator). Are there any tangible changes in the implementation?
Yes, the change is only a refactoring to adapt to the new, more flexible …
Thanks Tessil.
@jianlijianli I fixed an implicit-cast warning which caused an error in one of the CI builds. The PR will need re-approval, thanks!
Thanks Tessil. GitHub still shows a build failure on Intel oneDNN but it appears to be unrelated.
@@ -637,6 +637,7 @@ BUILTIN_KERNEL_SRCS = [
     "strided_slice.cc",
     "sub.cc",
     "svdf.cc",
+    "table.cc",
Is it possible to remove the new custom op from the default op registration? Instead, we can keep it as a part of the custom_ops target under the tensorflow/lite/BUILD file.
Most common TFLite users do not require a new table custom op for now, and we would like to keep the TFLite library as compact as possible.
I can remove it from the default registration and add it to the custom_ops library, but out of curiosity, are there specific builds generated with these custom ops? And how should we compile such builds (as the …
In Bazel, it is possible to create a custom TFLite build target depending on the custom_ops build target. This is the way to use user-generated custom ops.
[TFLite] Add TABLE operator for LUT-based operators lowering
PiperOrigin-RevId: 387605434 Change-Id: I40ca1c45b169f5f801cc5663e856d7c20ee6f022
PiperOrigin-RevId: 388849112 Change-Id: Iaac0005b519ffe32f7f298ee80a14735bff9c9c2


Hi,
This PR adds a TFLite TABLE operator, similar to the TABLE operator of the TOSA specification. The operator takes int8 or int16 inputs and looks them up in the table associated with the operator to produce the outputs.
This operator can be used to lower non-linear quantized functions, such as the exponential, to a LUT over the quantized input range. The following proof-of-concept commit quantizes the EXP operator by generating a LUT over the input range of the operator and replacing it with a TABLE operator in the exported TFLite model.
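To make the idea concrete, here is a minimal stand-alone sketch of lowering a quantized EXP to a 256-entry int8 table. This is not the PR's actual kernel code, and the quantization parameters are made-up example values:

```python
# Hypothetical sketch: lower quantized EXP to an int8 lookup table.
# IN_SCALE/OUT_SCALE etc. are assumed example quantization parameters.
import math

IN_SCALE, IN_ZERO_POINT = 0.1, 0          # assumed input quantization
OUT_SCALE, OUT_ZERO_POINT = 0.05, -128    # assumed output quantization

def build_exp_lut():
    """Precompute exp() for every possible int8 input value."""
    lut = []
    for q in range(-128, 128):
        real = IN_SCALE * (q - IN_ZERO_POINT)            # dequantize
        out = round(math.exp(real) / OUT_SCALE) + OUT_ZERO_POINT
        lut.append(max(-128, min(127, out)))             # requantize + clamp
    return lut

def table_op(inputs, lut):
    """The TABLE operator itself: a plain per-element lookup."""
    return [lut[q + 128] for q in inputs]

lut = build_exp_lut()
print(table_op([-128, 0, 10], lut))  # → [-128, -108, -74]
```

All the transcendental work happens once at conversion time in build_exp_lut; at inference time the TABLE operator is just an array index per element.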
This PR only adds the operator and doesn't provide any transformations yet; these will be part of a different PR. The gen_lut function has also been extended to support int8->int8, int8->int16 and int16->int8 tables in addition to the previously supported int16->int16 table.
Thibaut
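The extended table generation could be sketched roughly as follows. The real gen_lut helper lives in TFLite's kernel internals and differs in detail; every name and parameter here is an assumption, shown only to illustrate an int8 -> int16 table:

```python
# Simplified stand-alone sketch of an int8 -> int16 table build
# (all names and parameters are illustrative assumptions).
import math

def gen_lut_int8_to_int16(func, in_scale, in_zp, out_scale, out_zp):
    """Tabulate `func` for all 256 int8 inputs, producing int16 outputs."""
    table = []
    for q in range(-128, 128):
        real_out = func(in_scale * (q - in_zp))          # dequantize + apply
        q_out = round(real_out / out_scale) + out_zp     # requantize
        table.append(max(-32768, min(32767, q_out)))     # clamp to int16
    return table

# Example: tanh tabulated onto an int16 output grid
table = gen_lut_int8_to_int16(math.tanh, 0.05, 0, 1.0 / 32768.0, 0)
print(len(table), table[128])  # 256 entries; the entry for q == 0 is 0
```

The int8-input variants stay at 256 entries regardless of the output type; only the output quantization and clamping bounds change.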