Linux automation
My package's dependencies are not stable yet; I am considering switching the base Python version back to a static python2.7, tinypy, or micropython. (The current one is python3.10; I have a statically compiled amd64 build at: https://gitlab.com/yingshaoxo/use_docker_to_build_static_python3_binary_executable)
But it is hard to make sure everything will still work 20 years later, because I can't guarantee that 20 years from now you can still buy hardware that runs python2.7, tinypy, or an old version of micropython. (Maybe you simply can't download a python2.7 binary or its source code, or can't compile it since you don't have an old GCC.)
If you use a new GCC or a new Python, it ends up like this: 50 years after you compile the same source code, the binary grows to 500 times its original size, or the new Python is 500 times bigger than 3MB, or the performance drops to 1/100 of the original. That is to say, on an old computer you can run a 5KB program to finish a task very quickly; on a new computer you have to use a 500MB program to finish the same task, and it runs slower.
Just do a search for "How to build a computer by using basic electronic units, without any other chip or micro_controller?" You will simply find there are no answers. Do you live in a free world?
python3 -m pip install "git+https://github.com/yingshaoxo/auto_everything.git@dev" --break-system-packages
# Use github with care, you may get banned (404) for saying the 'fuck' word: https://yingshaoxo.xyz/pictures/github/index.html
or
python3 -m pip install auto_everything --break-system-packages
# I think the newer versions of pypi, pip, and the pip package format have problems. Why do they use a file ending with ".toml"? Do they think a "setup.py" python file can't be used to represent information? That a python dict can't represent information? Are they stupid? Because of the ".toml" file, I can't use the old version-8 pip to install new packages, and I can't even upgrade pip itself because it can't find a "setup.py" file in the new pip package. They made a big bug. (A setup.py sketch follows after these install options.)
or
Just copy the 'auto_everything' sub_folder into the root folder of your project, so that you can import it directly.
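As a side note on the ".toml" complaint above: a plain setup.py really can carry all package metadata as an ordinary python dict. A minimal sketch (the version number and other values are made-up examples, not the real auto_everything metadata):

from setuptools import setup, find_packages

# all metadata lives in a normal python dict; no ".toml" file needed
package_info = {
    "name": "auto_everything",
    "version": "0.0.1",  # example value
    "packages": find_packages(),
    "python_requires": ">=3.5",
}

setup(**package_info)

With a file like this at the project root, even an old pip can do a plain "pip install .".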
# For an amd64 linux machine, you can get a statically compiled python3.10 by doing the following.
sudo su
curl -sSL https://gitlab.com/yingshaoxo/use_docker_to_build_static_python3_binary_executable/-/raw/master/install.sh?ref_type=heads | bash
What the fuck is debian thinking? Why can't we use pip to directly install a package anymore? Does the debian/ubuntu linux branch want to force people's packages through a strict censorship process, so they can decide which software is good and which is not?
'export PATH=$PATH:/**/bin/' still works fine.
Where is the freedom? My dear people!
What is the difference between pip install and apt install? Simply that pypi has more freedom?
2025: Actually, I found pypi is also not free anymore; they keep making package publishing more complex and painful. (Without 2-factor verification, you can't even log in to your pypi account.) I think you had better create your own hardware and launch a new software distribution platform. You would have total freedom in your kingdom if you created that world by yourself.
I even found that a VPS (remote server computer) is also not stable: they force you to update the system, and the new system always has a background process monitoring your users' data. And considering censorship driven by bad laws, I think you should also create your own physical network.
"More code, more bug, more dependencies, more unstable, remember that. --- yingshaoxo"
sudo pip3 install auto_everything==3.9
or
poetry add auto_everything==3.9
# use poetry with care, it won't tell you the path where your package got installed
from auto_everything.terminal import Terminal
t = Terminal()
reply = t.run_command('uname -a')
print(reply)
commands = """
sudo apt update
uname -a
"""
t.run(commands)
t.run_program('firefox')
t.run_py('your_file.py')
t.run_sh('your_file.sh')
status = t.is_running('terminal')
print(status)
t.kill('terminal')
from auto_everything.python import Python
py = Python()
py.fire(your_class_name)
py.make_it_global_runnable(executable_name="Tools")
Let's assume you have a file named Tools.py:
from auto_everything.base import Python
from auto_everything.terminal import Terminal  # needed, since the methods below use t.run

py = Python()
t = Terminal()

class Tools():
    def push(self, comment):
        t.run('git add .')
        t.run('git commit -m "{}"'.format(comment))
        t.run('git push origin')

    def pull(self):
        t.run("""
        git fetch --all
        git reset --hard origin/master
        """)

    def undo(self):
        t.run("""
        git reset --mixed HEAD~1
        """)

    def reset(self):
        t.run("""
        git reset --hard HEAD^
        """)

    def hi(self):
        print("Hi, Python!")

py.fire(Tools)
py.make_it_global_runnable(executable_name="MyTools")
After running this script once with python3 Tools.py hi, you will be able to use MyTools to run this script anywhere on your machine:
yingshaoxo@pop-os:~$ MyTools hi
Hi, Python!
YRPC example. First, define your protocol in a file such as english.proto:

service Greeter {
rpc say_hello (hello_request) returns (HelloReply);
}
enum UserStatus {
OFFLINE = 0;
ONLINE = 1;
}
message hello_request {
string name = 1;
UserStatus user_status = 2;
repeated UserStatus user_status_list = 3;
}
message HelloReply {
string message = 1;
}
from auto_everything.develop import YRPC
yrpc = YRPC()
for language in ["python", "dart", "typescript"]:
yrpc.generate_code(
which_language=language,
input_folder="/home/yingshaoxo/CS/protocol_test/protocols",
input_files=["english.proto"],
output_folder="/Users/yingshaoxo/CS/protocol_test/generated_yrpc"
)
Here, we only use python for the server side.
from generated_yrpc.english_rpc import *
class NewService(Service_english):
async def say_hello(self, item: hello_request) -> HelloReply:
reply = HelloReply()
reply.message = item.name
return reply
service_instance = NewService()
run(service_instance, port="6060")
And here is a dart client:

void main() async {
var client = Client_english(
service_url: "http://127.0.0.1:6060",
error_handle_function: (error_message) {
print(error_message);
},
);
var result = await client.say_hello(
item: hello_request(name: "yingshaoxo")
);
if (result != null) {
print(result);
}
}
from auto_everything.base import IO
io = IO()
io.write("hi.txt", "Hello, world!")
print(io.read("hi.txt"))
io.append("hi.txt", "\n\nI'm yingshaoxo.")
print(io.read("hi.txt"))
from auto_everything.disk import Disk
from pprint import pprint
disk = Disk()
files = disk.get_files(folder=".", type_limiter=[".mp4"])
files = disk.sort_files_by_time(files)
pprint(files)
from auto_everything.disk import Store
store = Store("test")
store.set("author", "yingshaoxo")
store.delete("author")
store.set("author", {"email": "[email protected]", "name": "yingshaoxo"})
print(store.get_items())
print(store.has_key("author"))
print(store.get("author", default_value=""))
print(store.get("whatever", default_value="alsjdasdfasdfsakfla"))
store.reset()
print(store.get_items())
from auto_everything.cryptography import EncryptionAndDecryption  # module path assumed; adjust if the class lives elsewhere

encryption_and_decryption = EncryptionAndDecryption()
a_dict = encryption_and_decryption.get_secret_alphabet_dict("hello, world")
a_sentence = "I'm yingshaoxo."
encrypted_sentence = encryption_and_decryption.encode_message(a_secret_dict=a_dict, message=a_sentence)
print()
print(encrypted_sentence)
> B'i ybjdqahkxk.
decrypted_sentence = encryption_and_decryption.decode_message(a_secret_dict=a_dict, message=encrypted_sentence)
print(decrypted_sentence)
> I'm yingshaoxo.
from auto_everything.cryptography import JWT_Tool  # module path assumed; adjust if the class lives elsewhere

jwt_tool = JWT_Tool()
secret = "I'm going to tell you a secret: yingshaoxo is the best."
a_jwt_string = jwt_tool.my_jwt_encode(data={"name": "yingshaoxo"}, a_secret_string_for_integrity_verifying=secret)
print(a_jwt_string)
> eyJhbGciOiAiTUQ1IiwgInR5cCI6ICJKV1QifQ==.eyJuYW1lIjogInlpbmdzaGFveG8ifQ==.583085987ba46636662dc71ca6227c0a
original_dict = jwt_tool.my_jwt_decode(jwt_string=a_jwt_string, a_secret_string_for_integrity_verifying=secret)
print(original_dict)
> {'name': 'yingshaoxo'}
fake_jwt_string = "aaaaaa.bbbbbb.abcdefg"
original_dict = jwt_tool.my_jwt_decode(jwt_string=fake_jwt_string, a_secret_string_for_integrity_verifying=secret)
print(original_dict)
> None
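By the way, from the output above you can see the token shape: base64(header).base64(payload).md5_hex_signature, where the header decodes to {"alg": "MD5", "typ": "JWT"}. Here is a minimal sketch of how such a token could be built and checked; the exact signing input (header + "." + payload + secret) is my assumption, not necessarily what my_jwt_encode really does:

import base64
import hashlib
import json

def jwt_like_encode(data, secret):
    # header and payload are base64 encoded JSON, as in the output above
    header = base64.b64encode(json.dumps({"alg": "MD5", "typ": "JWT"}).encode()).decode()
    payload = base64.b64encode(json.dumps(data).encode()).decode()
    # assumed signing scheme: md5 over header, payload and the secret
    signature = hashlib.md5((header + "." + payload + secret).encode()).hexdigest()
    return header + "." + payload + "." + signature

def jwt_like_decode(jwt_string, secret):
    header, payload, signature = jwt_string.split(".")
    expected = hashlib.md5((header + "." + payload + secret).encode()).hexdigest()
    if signature != expected:
        return None  # integrity check failed, like the fake token case above
    return json.loads(base64.b64decode(payload))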
from auto_everything.web import Selenium
my_selenium = Selenium("https://www.google.com", headless=False)
d = my_selenium.driver
# get input box
xpath = '//*[@id="lst-ib"]'
elements = my_selenium.wait_until_elements_exists(xpath)
if len(elements) == 0:
exit()
# text inputting
elements[0].send_keys('\b' * 20, "yingshaoxo")
# click search button
elements = my_selenium.wait_until_elements_exists('//input[@value="Google Search"]')
if len(elements):
elements[0].click()
# exit
my_selenium.sleep(30)
d.quit()
We treat every char as an id or tensor element.
In GPU based machine learning algorithms, you often work with numbers like [23, 32, 34, 54].
But now, it becomes ['a', 'b', 'c', 'd'], or ASCII numbers in [0, 255].
long sequence (meaning group) -> long sequence (meaning group)
what you do -> 你干什么
It depends on -> 这取决于
(It depends on) (what you do) -> 这取决于 你干什么
Meaning groups can be found automatically; all you have to do is count how many times each continuous word sequence appears. The more often a continuous word sequence appears, the more likely it is a meaning group, as sketched below.
It can all be summarized as "divide and conquer".
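A minimal sketch of that counting idea (the corpus and the n-gram lengths are made-up examples):

from collections import Counter

def find_meaning_groups(text, min_length=2, max_length=4):
    # count every continuous word sequence (n-gram) in the text
    words = text.split()
    counter = Counter()
    for n in range(min_length, max_length + 1):
        for i in range(len(words) - n + 1):
            counter[" ".join(words[i:i + n])] += 1
    # the more often a sequence appears, the more likely it is a meaning group
    return [(group, count) for group, count in counter.most_common() if count > 1]

print(find_meaning_groups("it depends on what you do . it depends on the weather ."))
# [('it depends', 2), ('depends on', 2), ('it depends on', 2)]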
one char predict next char
two char predict next char
...
one word predict next word
two words predict next word
three words predict next word
...
When you use it, go from bottom to top: use the longest sequence to predict the next word first.
The more levels you make, the more accurate it will be.
It is a dict based next word generator, so the speed is super quick.
Don't expect this method to have high accuracy, because the logic is simple; it can only be used for adding punctuation, if you use the previous words and the next words to predict the center character. A sketch follows below.
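A minimal sketch of that multi-level dict based predictor (the training text and level count are made-up examples):

def train(text, max_level=3):
    # one dict per level: the last n words -> counts of the next word
    words = text.split()
    levels = {n: {} for n in range(1, max_level + 1)}
    for n in range(1, max_level + 1):
        for i in range(len(words) - n):
            key = " ".join(words[i:i + n])
            next_word = words[i + n]
            levels[n].setdefault(key, {})
            levels[n][key][next_word] = levels[n][key].get(next_word, 0) + 1
    return levels

def predict_next_word(levels, text):
    # from bottom to top: try the longest matching sequence first
    words = text.split()
    for n in range(max(levels), 0, -1):
        key = " ".join(words[-n:])
        if key in levels[n]:
            candidates = levels[n][key]
            return max(candidates, key=candidates.get)
    return None

levels = train("it depends on what you do and it depends on the weather")
print(predict_next_word(levels, "it depends"))  # -> "on"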
#yingshaoxo: I could give you a template for general AI: ask 100000 people to work on one AI project and do hard coding, each person writing if-else logic for 3 years without repeating each other's work. A general AI could be made this way if you have no dependencies and are not being spied on while offline, because those countless hard coded functions will cover almost every language level question and answer case in normal life.
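A minimal sketch of one person's share of that hard coding (the questions and answers are made-up examples):

def hard_coded_answer(question):
    # each contributor writes if-else branches like these for years
    question = question.lower().strip()
    if "your name" in question:
        return "I am a hard coded general AI."
    if "weather" in question:
        return "I can't see the sky, but I can ask you: how does it look outside?"
    if question.endswith("?"):
        return "I don't have a branch for that question yet."
    return "Tell me more."

print(hard_coded_answer("What is your name?"))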
Here is the code-generating loop version of that idea:

from auto_everything.terminal import Terminal
import json

terminal = Terminal()
global_memory_dict = {}

def update_global_dict_based_on_new_information(input_text):
    global global_memory_dict
    # find a way to simplify the input_text into pure json5-style data
    global_memory_dict.update({"input_text": input_text})

def generate_machine_code_from_memory_and_input_text(memory_dict, input_text):
    # placeholder: the real generator has to be written by you
    return "print(memory_dict.get('input_text'))"

def natural_language_to_task_code(input_text):
    global global_memory_dict
    # You have to let the machine generate different code or algorithms for different input_text, so that each time the reply is different.
    code = generate_machine_code_from_memory_and_input_text(global_memory_dict, input_text)
    return code

def execute_code(code):
    global global_memory_dict
    # For example, execute python code.
    previous_info_code = f"""
import json
memory_dict = json.loads('{json.dumps(global_memory_dict)}')
"""
    result = terminal.run_python_code(previous_info_code + code)
    return result

while True:
    input_text = input("What do you want to say? ")
    update_global_dict_based_on_new_information("question:\n" + input_text)
    code = natural_language_to_task_code(input_text)
    result = execute_code(code)
    print(result)
    update_global_dict_based_on_new_information("my_answer_and_experiment_result:\n" + result)