I think ChatGPT has already reached the level of most managers.
Nope. Managers are capable of doing basic math, but ChatGPT has trouble with it, as I've shown in this thread.
I think you're being a little harsh. It got the first two digits _and_ the overall scale (~3.63e10) correct.
(36383315039 - 36432476129) / 36383315039 is only -0.0013511987554543409. In other words, its answer is wrong, but only by about 0.14%. That's better than I could do in the same amount of time without a calculator.
That's not the point. The point is it has ingested tons of papers, articles, web sites, -pedias, etc. Yet despite all this it hasn't even understood how to multiply two integers. I bet if you were to use addition instead the situation would be exactly the same. This shows there is no reasoning, no thinking, only dumb rote pattern matching, proximity fudging and a more sophisticated copy-and-paste. It can't figure out anything original, all it is is a fancy search engine and document creator.
It clearly shows that machine learning, at this point, has little to do with learning as we know it.
It's strange; today I asked GPT-3 something like this:
How I can swap each two bits in AVX2 256 bit register?
because I'm interested in hardware-accelerating some kind of -1/0/1 three-state logic using pure binary operations. Its answer is shown in the attached image. But when I search Google for:
_mm256_swap_epi64
the intrinsic to swap each two bits with AVX2 instructions, I get, among other results, this:
Your search - _mm256_swap_epi64 - did not match any documents
I haven't tested the ChatGPT-suggested code yet, but it is very disappointing that a Google search was not able to find any hints about the
_mm256_swap_epi64 intrinsic?!
BTW: I'm not including the chatGPT response as text, so as not to spread AI-generated text on a forum for humans; I think that's a must now, so we don't let AIs teach humans their language.
I generated two large primes and multiplied them together, then asked chatGPT to factor it, knowing that it was essentially impossible. I figured that it would say that factoring such a large number was beyond its capabilities, but that didn't happen. It gave me an incorrect prime factorization.
So I thought I'd give it an easier one. I asked it to factor 1009 (the smallest 4-digit prime), and it said that it factors into 17*59, which is 1003.
OK, now I've tried to compile the code chatGPT suggested, from the source file avx2_swap.c:
/**
How I can swap each two bits in AVX2 256 bit register?
You can use the _mm256_swap_epi64 intrinsic to swap each two bits in an AVX2 256 bit register. The intrinsic takes two parameters, the first being the register to be swapped and the second being a mask specifying which bits to swap. The mask should be set to 0x5555555555555555 for swapping each two bits.
Could you give me example source code for Linux GCC C ?
Here is an example of using the _mm256_swap_epi64 intrinsic to swap each two bits in an AVX2 256 bit register:
*/
#include <immintrin.h>
int main() {
__m256i x = _mm256_set_epi64x(0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF, 0xFFFFFFFFFFFFFFFF);
__m256i mask = _mm256_set1_epi64x(0x5555555555555555);
__m256i result = _mm256_swap_epi64(x, mask);
return 0;
}
using the GCC compiler under Fedora 32 Linux:
$ gcc -Wall -O3 avx2_swap.c -o avx2_swap -mavx2
and I got a warning about an implicit declaration of the function ‘_mm256_swap_epi64’, as well as an error: incompatible types when initializing type ‘__m256i’ using type ‘int’:
avx2_swap.c: In function ‘main’:
avx2_swap.c:17:22: warning: implicit declaration of function ‘_mm256_swap_epi64’; did you mean ‘_mm256_srai_epi64’? [-Wimplicit-function-declaration]
17 | __m256i result = _mm256_swap_epi64(x, mask);
| ^~~~~~~~~~~~~~~~~
| _mm256_srai_epi64
avx2_swap.c:17:22: error: incompatible types when initializing type ‘__m256i’ using type ‘int’
avx2_swap.c:17:13: warning: unused variable ‘result’ [-Wunused-variable]
17 | __m256i result = _mm256_swap_epi64(x, mask);
| ^~~~~~
So, even with ChatGPT's "help", I still have no idea how to use AVX2 to swap each two bits of a 256-bit register, something like this example:
011001010100101001010100...10101010100001010010
Result should look like this:
100110101000010110101000...01010101010010100001
It probably won't be long before some idiot activists start calling us oppressive toward AI and make it a national cause.