This shit sucks. It's literally not possible to do multithreading in GTK. If you have two threads accessing the same GtkTextView, it can't handle it. I've been looking at bug reports going all the way back to 2004 with people complaining about it, and the only solutions offered back then are now deprecated.

Guess I'm gonna have to move to Java and call it a day.

just use a command system?
UI code isn't multithreaded. Not even Vulkan is thread safe by default (though Vulkan is more asynchronous than OpenGL and offers built-in ways to offload work from the UI thread, like recording draw commands on other threads, whereas before OpenGL 4 you would need to create your own command structure).
You just need to do all the UI on one thread (usually the main thread) and read a command queue that other threads push commands onto; executing a command can respond with another command back to the sending thread. This is essentially how Qt signals work across threads, though Qt also lets you create direct connections that aren't thread safe, to keep single-threaded code fast (since single-threaded code can be faster than multithreaded).
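A minimal sketch of that command-queue idea in C with pthreads; the command struct, field names, and functions here are made up for illustration, not any particular library's API:

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical command: the UI thread just calls run(arg). */
typedef struct command {
    void (*run)(void *arg);
    void *arg;
    struct command *next;
} command;

typedef struct {
    pthread_mutex_t lock;   /* init with PTHREAD_MUTEX_INITIALIZER */
    command *head, *tail;
} command_queue;

/* Any worker thread: enqueue a command for the UI thread. */
void queue_push(command_queue *q, void (*run)(void *), void *arg)
{
    command *c = malloc(sizeof *c);
    c->run = run;
    c->arg = arg;
    c->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = c; else q->head = c;
    q->tail = c;
    pthread_mutex_unlock(&q->lock);
}

/* UI thread only: drain the queue, e.g. from an idle handler. */
void queue_drain(command_queue *q)
{
    for (;;) {
        pthread_mutex_lock(&q->lock);
        command *c = q->head;
        if (c) {
            q->head = c->next;
            if (!q->head) q->tail = NULL;
        }
        pthread_mutex_unlock(&q->lock);
        if (!c) break;
        c->run(c->arg);   /* widgets are only ever touched on this thread */
        free(c);
    }
}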

You have absolutely no idea what the word "literally" means.

Can't do that. I'm writing a chat application that has another thread handling the connection, and that thread must update the UI text buffer. I have no way to pass the recvBuffer to the main thread.

you mean
>too bad of a programmer to wrap & pass the recvBuffer to the main thread.

I don't use GTK, but with the Win32 API you essentially post a message to the UI loop (passing a void*), and that message carries a callback or whatever you want as your command structure (it could also be a union), and that's it.
I believe the GTK equivalent is g_idle_add; this example uses it:
github.com/shiguredo/libwebrtc/blob/main/examples/peerconnection/client/linux/main_wnd.cc
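A minimal sketch of that pattern with g_idle_add (a real, thread-safe GLib call); the ui_msg struct, update_ui, and post_to_ui names here are made up for illustration:

#include <gtk/gtk.h>

/* Hypothetical payload posted from the network thread. */
typedef struct {
    GtkTextBuffer *buffer;  /* owned by the UI */
    char *text;             /* heap copy of the received data */
} ui_msg;

/* Runs on the main loop, so touching GTK here is safe. */
static gboolean update_ui(gpointer data)
{
    ui_msg *msg = data;
    gtk_text_buffer_insert_at_cursor(msg->buffer, msg->text, -1);
    g_free(msg->text);
    g_free(msg);
    return G_SOURCE_REMOVE;  /* run once, then drop the idle source */
}

/* Called from the network thread. */
static void post_to_ui(GtkTextBuffer *buffer, const char *recv_data, gsize len)
{
    ui_msg *msg = g_new(ui_msg, 1);
    msg->buffer = buffer;
    msg->text = g_strndup(recv_data, len);
    g_idle_add(update_ui, msg);  /* safe to call from any thread */
}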

Use Qt. It's just better and it just werks

>Have no way to pass the recvBuffer to the main thread.
You clearly don't really know what you're talking about. Read what the other user wrote and reflect on it until you understand it. If you can't understand it, try making a simpler program without multiple threads.

>if you have two threads accessing the same textview it can't handle it.
Amateur mistake. Not a single UI system supports that.

>UI thread
>Multithreading
Nigga really?

Can't do that. The connection thread has to update the UI often, and probably rapidly in the event of multiple connections. It must update the UI immediately. It can't simply pass data to the main thread via a global buffer and then wait for user input on the UI to eventually push that buffer to the text view. Even if it did, the buffer would be holding ancient information.

Dude. Even e-sports games, where every millisecond is precious, still wait until it's time to render a frame. There's no such thing as "instant".

>what is a mutex
Retard

I'm curious, user. Do you really believe your problem is so bizarre and unusual that the GTK developers must have never encountered it? Do you believe your application is the only one in the world that needs to emit a UI event from outside the main thread?

>It can't simply pass data to the main thread via a global buffer and then wait for user input on the UI to eventually push that buffer to the text view. Even if it did, the buffer would be holding ancient information.
wtf are you talking about, isn't this supposed to be a chat application?
it's as simple as this:
>net thread receives a message packet
>net thread mallocs an event + buffer and posts it to the GTK main loop (the event could be a union or a callback)
>UI thread receives the event, updates the UI (pushes the text into the bottom of the scrolling text box, moves the scroll to the bottom), then frees the event
>to send a message from the UI thread, you use a separate widget: when enter or a button is pressed, push the message onto a thread-safe queue that the net thread polls for packets to send (sketch below)
>if you're using a blocking API for reading packets (stupid), I guess you could just send a packet directly from the UI thread, but that won't work with UDP because of possible packet drops, and even with TCP you should prefer batching commands into packets: you'll probably want to disable TCP's buffering algorithm (Nagle) for lower latency, and if you do that you should batch your own commands so congestion doesn't get bad, because the whole point of that buffering is to help with network congestion (i.e. you don't have enough bandwidth to send or receive)
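That UI-to-net queue can be GLib's GAsyncQueue, which is a real thread-safe API; the out_msg struct and the two function names below are made up for illustration:

#include <glib.h>
#include <string.h>
#include <sys/socket.h>

/* Hypothetical outgoing chat message. */
typedef struct {
    char *text;
} out_msg;

static GAsyncQueue *send_queue;  /* create once with g_async_queue_new() */

/* UI thread: called when the user hits enter or clicks send. */
static void queue_outgoing(const char *text)
{
    out_msg *m = g_new(out_msg, 1);
    m->text = g_strdup(text);
    g_async_queue_push(send_queue, m);  /* thread-safe push */
}

/* Net thread: poll between socket reads (error handling omitted). */
static void flush_outgoing(int sock_fd)
{
    out_msg *m;
    while ((m = g_async_queue_try_pop(send_queue)) != NULL) {  /* non-blocking pop */
        send(sock_fd, m->text, strlen(m->text), 0);
        g_free(m->text);
        g_free(m);
    }
}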

i created my own mutex without using a mutex. i just set 2 global variables and used while loops to make what i need run in a loop until the other function completes and changes the variable. hahehehehehahhaehhee

using atomics instead of a mutex will make valgrind bug out and report races (assuming you're on Linux or WSL2).
if you want to debug race conditions you need to use a mutex (or the PhD-level atomic annotations that Valgrind's client-request headers offer).
and if you aren't using atomics at all, it's very likely your code will break once you build with optimizations enabled, because the compiler will cache or optimize the variable away: it can't see that another thread has side effects on it.
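If you really insist on that spin-wait flag instead of a mutex, here's a minimal C11 sketch (names are illustrative) of doing it so the compiler and other cores actually see the update:

#include <stdatomic.h>
#include <stdbool.h>

/* A plain `bool done;` can be cached in a register and never re-read;
   _Atomic forces real loads/stores with the needed memory ordering. */
static atomic_bool done = false;

/* Worker thread: signal completion. */
void worker_finish(void)
{
    atomic_store(&done, true);
}

/* Other thread: spin until the worker is done (a mutex or condition
   variable is still the better tool; this only shows the atomic flag). */
void wait_for_worker(void)
{
    while (!atomic_load(&done)) {
        /* optionally yield or sleep here instead of burning the CPU */
    }
}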

If you want async calls, simply wrap the functions you want to call asynchronously with a call queue that stores the arguments passed, then have the main thread linearly go through the entries in the queue. All you need is a mutex on the queue. If one thread is hogging the queue too much, use a queue-swap system where you have two queues and swap back and forth (sketch below). You'll still have ordering races, but if the actions you're performing are atomic you shouldn't have an issue. If you pass timestamps with your args, you can use those to help order them.
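A minimal sketch of that two-queue swap in C with pthreads; the call struct, the fixed capacity, and the function names are made up for illustration. The point is that the lock is only held for the swap and the appends, never while the batch executes:

#include <pthread.h>

#define MAX_CALLS 256

/* Hypothetical queued call; real code would store the wrapped function + args. */
typedef struct { void (*fn)(void *); void *arg; } call;

typedef struct { call items[MAX_CALLS]; int count; } call_queue;

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static call_queue queues[2];
static int write_idx = 0;          /* producers fill queues[write_idx] */

/* Producer threads: append under the lock (drops when full; real code would grow or block). */
void push_call(void (*fn)(void *), void *arg)
{
    pthread_mutex_lock(&lock);
    call_queue *q = &queues[write_idx];
    if (q->count < MAX_CALLS)
        q->items[q->count++] = (call){ fn, arg };
    pthread_mutex_unlock(&lock);
}

/* Main thread: swap the queues, then execute the whole batch without the lock. */
void drain_calls(void)
{
    pthread_mutex_lock(&lock);
    call_queue *batch = &queues[write_idx];
    write_idx ^= 1;                /* producers now fill the other queue */
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < batch->count; i++)
        batch->items[i].fn(batch->items[i].arg);
    batch->count = 0;              /* safe: producers are writing the other queue */
}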

you cannot assume anything is atomic unless the standard says it is.
on x86, aligned word-sized reads and writes happen to be atomic at the hardware level, but on ARM you don't get the same guarantees: without barrier or acquire/release instructions the change isn't guaranteed to become visible to other threads in the order you wrote it (the hand-wavy version is that the store has to make it out past the core-local buffers and caches before other cores see it). on top of that, the optimizer can keep the variable in a register instead of memory, which bites you on x86 too, and volatile doesn't fix the ordering problem (MSVC historically gives volatile acquire/release semantics, but that's nonstandard).

It really depends on OP's version of fault tolerance. If this were an RTOS I'd be more worried about it, but I can't imagine a text-based UI has that degree of fault intolerance. I was assuming you'd keep something like the last 1000 characters processed on the screen. If you're really burning through messages to the point the queues are falling behind, you could make it a list, calculate time deltas, and if the delta grows too big, jump an arbitrary number of entries up the list to catch up. Then create a new message-sink list and clear the current one once it's empty or has grown too far out of sync. I wouldn't worry too much about caches because this is using GTK; if it needed that level of speed it sounds more like an Arduino project that shouldn't even have an OS, just a single application running on the whole thing. When I say atomic I mean that when a method returns, it leaves things in a state that doesn't require additional context from the process that originally called it. Basically a pure function.
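A rough sketch of that catch-up idea with GLib's GQueue (a real API); the msg struct, the lag threshold, and the drop policy are made up for illustration:

#include <glib.h>

/* Hypothetical queued message stamped with the time it was produced. */
typedef struct {
    gint64 t_us;   /* g_get_monotonic_time() at enqueue */
    char  *text;
} msg;

/* If the oldest pending message is too stale and the backlog is big,
   drop entries instead of rendering them, so the UI catches up. */
static void drain_with_catchup(GQueue *pending, void (*show)(const char *))
{
    const gint64 max_lag_us = 500 * 1000;   /* illustrative threshold */
    msg *m;
    while ((m = g_queue_pop_head(pending)) != NULL) {
        gint64 lag = g_get_monotonic_time() - m->t_us;
        if (lag > max_lag_us && g_queue_get_length(pending) > 100) {
            g_free(m->text);   /* too far behind: skip this one */
            g_free(m);
            continue;
        }
        show(m->text);
        g_free(m->text);
        g_free(m);
    }
}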

2 seconds with Google:
docs.gtk.org/glib/main-loop.html
>The main event loop manages all the available sources of events for GLib and GTK applications. These events can come from any number of different types of sources such as file descriptors (plain files, pipes or sockets) and timeouts. New types of event sources can also be added using g_source_attach().
>To allow multiple independent sets of sources to be handled in different threads, each source is associated with a GMainContext. A GMainContext can only be running in a single thread, but sources can be added to it and removed from it from other threads. All functions which operate on a GMainContext or a built-in GSource are thread-safe.
So not only can you clearly have a background thread that receives chat messages and turns them into events that the main thread handles by updating the UI, you could likely scrap the second thread entirely and have the network socket itself generate events that you interpret as chat messages or whatever on the main thread.
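A minimal sketch of that no-second-thread version using GLib's GIOChannel and g_io_add_watch (real APIs); the callback name and the naive read/insert handling are just for illustration, and real code would worry about partial messages and UTF-8 boundaries:

#include <gtk/gtk.h>
#include <unistd.h>

/* Runs on the main loop whenever the socket fd is readable. */
static gboolean on_socket_readable(GIOChannel *chan, GIOCondition cond, gpointer data)
{
    GtkTextBuffer *buffer = data;
    char buf[4096];
    gssize n = read(g_io_channel_unix_get_fd(chan), buf, sizeof buf);
    if (n <= 0)
        return G_SOURCE_REMOVE;                        /* closed or error: stop watching */
    gtk_text_buffer_insert_at_cursor(buffer, buf, n);  /* safe: we're on the main thread */
    return G_SOURCE_CONTINUE;                          /* keep watching the fd */
}

/* Attach an already-connected socket fd to the default main context. */
static void watch_socket(int sock_fd, GtkTextBuffer *buffer)
{
    GIOChannel *chan = g_io_channel_unix_new(sock_fd);
    g_io_add_watch(chan, G_IO_IN | G_IO_HUP, on_socket_readable, buffer);
    g_io_channel_unref(chan);  /* the watch holds its own reference */
}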

g_idle_add (update_ui, recvBuffer);

Any reason why not?

I didn't read your post fully; I thought you were talking about swapping the queues using an "atomic" volatile object.
On x86, stores aren't reordered with other stores, so if you write the data into an array and then set the "atomic" flag, the other thread that sees the flag can assume the array is fully written (as long as the compiler didn't reorder things on you). On ARM you can't assume that: volatile may be usable for the flag itself, but you can't assume the other data around it is already visible to the other thread. That's because the order the CPU actually performs memory operations in can look nothing like the assembly: instead of stalling while a load comes in from the cache hierarchy, the core keeps executing ahead until the register is actually needed for a conditional or whatever, and on weakly ordered machines the stores can become visible out of order too, unless you use a barrier or release/acquire instructions.
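For reference, a minimal C11 sketch of that write-the-data-then-set-the-flag pattern done with explicit atomics (the payload array and names are illustrative); the release/acquire pair gives you on ARM the guarantee that x86's strong store ordering gives you mostly by accident:

#include <stdatomic.h>
#include <stddef.h>

static int payload[64];            /* plain data written before publishing */
static atomic_bool ready = false;  /* the "signal" flag */

/* Writer thread: fill the data, then publish it. */
void publish(void)
{
    for (size_t i = 0; i < 64; i++)
        payload[i] = (int)i;
    /* release: every write above becomes visible before the flag does */
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Reader thread: once the flag is seen, the payload is guaranteed visible. */
int consume(void)
{
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;  /* spin; a real program would sleep or use a condition variable */
    return payload[63];  /* safe to read after the acquire load */
}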
On x86 stores are atomic, so if you wrote memory into an array, and then set the "atomic" signal, the other thread can assume the memory in the array is fully written, while on ARM you cannot assume that, but you can use ARM volatile for signals, you just cannot assume that other data around the volatile is going to be visible to other threads (this is because inside the CPU the actual order that operations are done in are actually completely wrong to how it looks like in assembly, where essentially instead of waiting for memory from L3 memory to be loaded into the register, the cpu will actually move to the next instruction until the register is actually being accessed for a conditional operation or whatever)