• This is a new section being rolled out to attract people interested in exploring the origins of the universe and the earth from a biblical perspective. Debate is encouraged and opposing viewpoints are welcome to post, but certain rules must be followed:
1. No abusive tagging. If abusive tags are found, they will be deleted and disabled by the Admin team.
2. No calling the biblical accounts a fable, fairy tale, etc. This is a Christian site, so members who participate here must be respectful in their disagreement.

Evolve 2025!

Clete

Truth Smacker
Silver Subscriber
Alright, I think this is going to be the final version!

I've got it now so that the program monitors the tries per second and dynamically adjusts the iterations per cycle to maintain the highest performance possible. The tries-per-second (TPS) number it uses for this is a bit different from the one displayed on the screen. The displayed number is an average, calculated by dividing the total number of tries by the total time (Session or Overall, respectively), whereas the TPS used to adjust the iterations per cycle (IPC) is measured over the previous one second of run time. There is a minimum IPC of 2000 hard-coded into the program. If you think your system would do better with fewer than that, just find every instance of "2000" in the code and change it to "1000" or whatever.

Anyway, the point is that it works, and it is optimized, as best as I can figure out, to take full advantage of whatever system it is running on. The only thing the user has to do is play around with the number of processes to see how many their system can handle before losing performance.

Code:
import curses
import multiprocessing
import random
import time
import json
import os
import sys

# -------------------------------
# Configuration and Constants
# -------------------------------
TARGET = "abcdefghijklmnopqrstuvwxyz"  # Target alphabet
ALLOWED_CHARS = "abcdefghijklmnopqrstuvwxyz"  # Allowed characters
PERSISTENT_FILE = "overall_stats.json"       # File for persistent stats

MIN_IPC = 2000  # Minimum iterations per cycle (never drop below this)

# -------------------------------
# Persistence Functions
# -------------------------------
def load_overall_stats():
    default_data = {
        "overall_attempts": 0,
        "overall_elapsed": 0.0,
        "best_data": {"attempt": " " * len(TARGET), "match_count": 0},
        "score_distribution": [0] * (len(TARGET) + 1)
    }
    if os.path.exists(PERSISTENT_FILE):
        try:
            with open(PERSISTENT_FILE, "r") as f:
                data = json.load(f)
            # Ensure all keys are present
            for key, default in default_data.items():
                if key not in data:
                    data[key] = default
        except Exception:
            data = default_data
    else:
        data = default_data
    return data

def save_overall_stats(overall_attempts, overall_elapsed, best_data, score_distribution):
    data = {
        "overall_attempts": overall_attempts,
        "overall_elapsed": overall_elapsed,
        "best_data": dict(best_data),
        "score_distribution": list(score_distribution)
    }
    try:
        with open(PERSISTENT_FILE, "w") as f:
            json.dump(data, f)
    except Exception as e:
        sys.stderr.write(f"Error saving persistent data: {e}\n")

# -------------------------------
# Utility: Time Formatting
# -------------------------------
def format_time(sec):
    total_seconds = sec
    years = int(total_seconds // (365 * 24 * 3600))
    total_seconds %= (365 * 24 * 3600)
    days = int(total_seconds // (24 * 3600))
    total_seconds %= (24 * 3600)
    hours = int(total_seconds // 3600)
    total_seconds %= 3600
    minutes = int(total_seconds // 60)
    seconds = total_seconds % 60
    return f"{years} years, {days} days, {hours} hours, {minutes} minutes, {seconds:05.2f} seconds"

# -------------------------------
# Worker Process Function
# -------------------------------
def worker(p_target, session_attempts, best_data, paused, exit_event,
           score_distribution, distribution_lock, iterations_list, worker_index):
    iterations_per_cycle = 2000  # start at 2000 iterations per cycle
    target_length = len(p_target)
    local_best_match = best_data.get("match_count", 0)
    rnd = random.Random()

    # Variables for sliding-window TPS measurement (window length ~1 second)
    window_start = time.time()
    window_attempts = 0
    last_window_tps = None

    while not exit_event.is_set():
        if paused.value:
            time.sleep(0.1)
            continue

        local_batch_attempts = 0
        local_best_count = local_best_match
        # Local distribution count for this cycle (for scores 0..target_length)
        local_distribution = [0] * (target_length + 1)

        for _ in range(iterations_per_cycle):
            attempt = ''.join(rnd.choice(ALLOWED_CHARS) for _ in range(target_length))
            match_count = sum(1 for i in range(target_length) if attempt[i] == p_target[i])
            local_distribution[match_count] += 1
            local_batch_attempts += 1
            if match_count > local_best_count:
                local_best_count = match_count
                # Update global best if this is an improvement
                if local_best_count > best_data.get("match_count", 0):
                    best_data["attempt"] = attempt
                    best_data["match_count"] = local_best_count

        # Update total session attempts
        with session_attempts.get_lock():
            session_attempts.value += local_batch_attempts

        # Merge this cycle's distribution counts into the shared distribution
        with distribution_lock:
            for i in range(len(local_distribution)):
                # Ensure the list is long enough (it normally is)
                if i < len(score_distribution):
                    score_distribution[i] += local_distribution[i]
                else:
                    score_distribution.append(local_distribution[i])

        # --- New Sliding-Window TPS-Based IPC Adjustment ---
        window_attempts += local_batch_attempts
        current_time = time.time()
        window_duration = current_time - window_start

        if window_duration >= 1.0:
            # Compute instantaneous TPS for this window
            current_window_tps = window_attempts / window_duration

            if last_window_tps is not None:
                # If TPS increased by >5%, increase iterations by 10%
                if current_window_tps > last_window_tps * 1.05:
                    iterations_per_cycle = int(iterations_per_cycle * 1.1)
                # If TPS dropped by >5%, decrease iterations by 10%
                elif current_window_tps < last_window_tps * 0.95:
                    iterations_per_cycle = max(MIN_IPC, int(iterations_per_cycle * 0.9))
            last_window_tps = current_window_tps
            window_start = current_time
            window_attempts = 0

        iterations_list[worker_index] = iterations_per_cycle
        local_best_match = local_best_count

# -------------------------------
# Curses UI Main Function
# -------------------------------
def main(stdscr):
    # Curses initialization
    curses.curs_set(0)
    stdscr.nodelay(True)
    stdscr.keypad(True)
    curses.start_color()
    curses.use_default_colors()
    curses.init_pair(1, curses.COLOR_GREEN, -1)

    # Load persistent stats (including best_data and score_distribution)
    persistent_data = load_overall_stats()
    overall_attempts_loaded = persistent_data.get("overall_attempts", 0)
    overall_elapsed_loaded = persistent_data.get("overall_elapsed", 0.0)
    persistent_best = persistent_data.get("best_data", {"attempt": " " * len(TARGET), "match_count": 0})
    persistent_distribution = persistent_data.get("score_distribution", [0] * (len(TARGET) + 1))

    # Variables for tracking active (non-paused) runtime
    session_active_time = 0.0
    last_loop_time = time.time()

    manager = multiprocessing.Manager()
    # Initialize best_data and score_distribution with persistent values
    best_data = manager.dict(persistent_best)
    session_attempts = multiprocessing.Value('Q', 0)  # unsigned long long; avoids 32-bit overflow on long runs
    paused = multiprocessing.Value('b', False)
    exit_event = multiprocessing.Event()

    score_distribution = manager.list(persistent_distribution)
    distribution_lock = multiprocessing.Lock()
    iterations_list = manager.list([2000] * NUM_WORKERS)

    workers = []
    for i in range(NUM_WORKERS):
        p = multiprocessing.Process(target=worker, args=(
            TARGET, session_attempts, best_data, paused, exit_event,
            score_distribution, distribution_lock, iterations_list, i
        ))
        p.start()
        workers.append(p)

    try:
        while True:
            stdscr.clear()
            current_time = time.time()
            dt = current_time - last_loop_time
            if not paused.value:
                session_active_time += dt
            last_loop_time = current_time

            # Handle key presses
            key = stdscr.getch()
            if key != -1:
                if key == 16:  # Ctrl+P toggles pause/resume
                    with paused.get_lock():
                        paused.value = not paused.value
                elif key == 17:  # Ctrl+Q quits the program
                    break

            session_elapsed = session_active_time
            overall_elapsed = overall_elapsed_loaded + session_elapsed

            with session_attempts.get_lock():
                session_attempts_val = session_attempts.value
            total_attempts = overall_attempts_loaded + session_attempts_val

            session_tps = session_attempts_val / session_elapsed if session_elapsed > 0 else 0
            overall_tps = total_attempts / overall_elapsed if overall_elapsed > 0 else 0

            avg_iterations = int(sum(iterations_list) / len(iterations_list)) if len(iterations_list) > 0 else 0

            # Build the display text:
            line = 0
            stdscr.addstr(line, 0, "Target Alphabet:")
            line += 1
            stdscr.addstr(line, 0, TARGET)
            line += 2

            stdscr.addstr(line, 0, "New Best Match:")
            line += 1
            best_attempt = best_data.get("attempt", " " * len(TARGET))
            for i, ch in enumerate(best_attempt):
                if i < len(TARGET) and ch == TARGET[i]:
                    stdscr.addstr(line, i, ch.upper(), curses.color_pair(1))
                else:
                    stdscr.addstr(line, i, ch)
            line += 2

            stdscr.addstr(line, 0, f"Total Attempts: {total_attempts:,}")
            line += 2

            stdscr.addstr(line, 0, f"Session Elapsed Time: {format_time(session_elapsed)}")
            line += 1
            stdscr.addstr(line, 0, f"Overall Elapsed Time: {format_time(overall_elapsed)}")
            line += 2

            stdscr.addstr(line, 0, f"Session Tries per Second: {session_tps:,.2f}")
            line += 1
            stdscr.addstr(line, 0, f"Overall Tries per Second: {overall_tps:,.2f}")
            line += 2

            stdscr.addstr(line, 0, "Score Distribution:")
            line += 1
            best_match = best_data.get("match_count", 0)
            for score in range(best_match + 1):
                stdscr.addstr(line, 0, f"{score} correct: {score_distribution[score]:,}")
                line += 1
            line += 1

            stdscr.addstr(line, 0, f"Number of Processes Running: {NUM_WORKERS}")
            line += 1
            stdscr.addstr(line, 0, f"Iterations per Cycle (avg): {avg_iterations}")
            line += 2

            status_str = "PAUSED" if paused.value else "RUNNING"
            stdscr.addstr(line, 0, f"Status: {status_str} (Pause/Resume: Ctrl+P, Quit: Ctrl+Q)")
            stdscr.refresh()
            time.sleep(0.1)
    finally:
        exit_event.set()
        for p in workers:
            p.join(timeout=1)
        overall_attempts_new = overall_attempts_loaded + session_attempts.value
        overall_elapsed_new = overall_elapsed_loaded + session_active_time
        save_overall_stats(overall_attempts_new, overall_elapsed_new, best_data, score_distribution)

# -------------------------------
# Main Entrypoint
# -------------------------------
if __name__ == '__main__':
    multiprocessing.freeze_support()  # For Windows support
    try:
        num_processes_input = input("Enter number of processes to run: ")
        try:
            num_processes = int(num_processes_input)
            if num_processes < 1:
                raise ValueError
        except ValueError:
            num_processes = min(24, multiprocessing.cpu_count())
            print(f"Invalid input. Defaulting to {num_processes} processes.")
        NUM_WORKERS = num_processes
        curses.wrapper(main)
    except KeyboardInterrupt:
        pass
 

Clete

Truth Smacker
Silver Subscriber
I may try to have ChatGPT program a totally different version that makes the same point.
I was thinking it would be cool to have it randomly generate a graphic version of an RNA molecule. The simplest known self-replicating molecule, and thus the simplest known thing that could possibly be affected by natural selection, is an RNA molecule with 102 nucleotides. There are four versions of the nucleotides (A, U, C, G), so there would be 4 to the power of 102 possible combinations, with only one that actually works. It's actually far more complex even than that, because it matters how the molecule folds and the nucleotides can connect with each other in various ways, but 4^102 is already a much larger number than 26^26, which is what this alphabet-based version is working on.

Thoughts?
 

Right Divider

Body part
@Clete FYI, the multiprocessing module has a function called cpu_count that will tell you how many cores the system has. You could use that as the default. Using a higher number is counterproductive.
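Something like this would do it - just a rough sketch, where the bracketed default in the prompt is illustrative:

Code:
import multiprocessing

# Default to however many CPU cores the system reports.
default_procs = multiprocessing.cpu_count()
raw = input(f"Enter number of processes to run [{default_procs}]: ")
try:
    num_processes = int(raw)
    if num_processes < 1:
        raise ValueError
except ValueError:
    # Blank or invalid input falls back to the core count.
    num_processes = default_procs
    print(f"Using the default of {num_processes} processes.")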

My Linux system only has 4 cores.
 

Clete

Truth Smacker
Silver Subscriber
@Clete FYI, the multiprocessing module has a function called cpu_count that will tell you how many cores the system has. You could use that as the default. Using a higher number is counterproductive.

My Linux system only has 4 cores.
I have 12 cores and 24 threads, but 8 processes is the optimal number for my system. Any more than that and I get slower performance out of this program. I have no idea why.

Also, my CPU gets to over 90°C when I run 8 processes! That seems pretty hot, so I'll probably back it off to 6 or 7.
 

Clete

Truth Smacker
Silver Subscriber
FYI: I just discovered something that this latest version is doing that isn't what I want it to do. I think it is only displaying the "New Best Match" and the score distribution numbers for the current session. This wipes out the overall best, which is counterproductive! I'll get it fixed ASAP.
 

Right Divider

Body part
I have 12 cores and 24 threads, but 8 processes is the optimal number for my system. Any more than that and I get slower performance out of this program. I have no idea why.
That's a little strange, since it appears to be a purely computational problem.
Perhaps using too many cores causes heating issues with your CPU.
 

Clete

Truth Smacker
Silver Subscriber
That's a little strange, since it appears to be a purely computational problem.
Perhaps using too many cores causes heating issues with your CPU.
It isn't really using more cores, per se. The multi-core approach that ChatGPT tried to implement wasn't nearly as fast as simply running more processes. If you tell it to run 8 processes, it is effectively running the program 8 times in parallel rather than running it once while utilizing more cores to do the math.
I'm completely out of my depth on this, so I may not have said that accurately. I just know that it isn't aiming at using more cores, even though it ends up doing so anyway.
 

Clete

Truth Smacker
Silver Subscriber
FYI: I just discovered something that this latest version is doing that isn't what I want it to do. I think it is only displaying the "New Best Match" and the score distribution numbers for the current session. This wipes out the overall best, which is counterproductive! I'll get it fixed ASAP.
Fixed it! The code shown in post #21 (and now also in post #1) is what I'm pretty sure is now the final version.

Next time either of you talks to Will Duffy, you should mention it and see if he wants to reactivate the link in that article with this modern code. It'll take 80 years for anyone to get to 16 correct letters.

(Well, I guess that depends on how many people are running it but still. It'll be a good long while in any case.)
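Here's a back-of-the-envelope check on that 80-year figure. The 5,000,000 tries per second is just an assumed aggregate rate for a machine running several processes:

Code:
from math import comb

ASSUMED_TPS = 5_000_000  # assumed aggregate tries per second

# Probability that a random 26-letter string matches 16 or more positions
p = sum(
    comb(26, k) * (1 / 26) ** k * (25 / 26) ** (26 - k)
    for k in range(16, 27)
)
expected_attempts = 1 / p                      # roughly 1.2e16 tries
years = expected_attempts / ASSUMED_TPS / (365 * 24 * 3600)
print(f"P(>=16 correct) ~ {p:.2e}; expected wait ~ {years:,.0f} years")

That prints an expected wait of roughly 75 years, so the 80-year ballpark holds at that rate.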
 

Derf

Well-known member
Alright, I think this is going to be the final version!

I just don't like it importing curses. Sounds like it would destroy life instead of create it.

It should import blessings.
 

Clete

Truth Smacker
Silver Subscriber
It looks like it crashed on me...

[screenshot attachment]
The CPUs were cranking... and then nothing... I'll look into this more tomorrow.
I recommend running it with just 1 or 2 processes. If that works, move up to three, and so on, until you stop seeing an increase in tries per second.

If running just one or two processes still crashes, I don't have a clue what the issue might be.
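If you want to automate that search, here's a rough benchmarking sketch. It times a simplified version of the inner loop (random 26-letter strings, no scoring or UI) for five seconds at each process count, so the numbers are only indicative:

Code:
import multiprocessing
import random
import time

ALLOWED_CHARS = "abcdefghijklmnopqrstuvwxyz"

def burn(counter, stop):
    # Generate random 26-letter strings until told to stop, counting attempts.
    rnd = random.Random()
    n = 0
    while not stop.is_set():
        ''.join(rnd.choice(ALLOWED_CHARS) for _ in range(26))
        n += 1
    with counter.get_lock():
        counter.value += n

if __name__ == '__main__':
    for procs in range(1, multiprocessing.cpu_count() + 1):
        counter = multiprocessing.Value('Q', 0)
        stop = multiprocessing.Event()
        workers = [multiprocessing.Process(target=burn, args=(counter, stop))
                   for _ in range(procs)]
        for w in workers:
            w.start()
        time.sleep(5)  # sample for five seconds
        stop.set()
        for w in workers:
            w.join()
        print(f"{procs} processes: {counter.value / 5:,.0f} tries/sec")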
 

Avajs

Active member
I may try to have ChatGPT program a totally different version that makes the same point.
I was thinking it would be cool to have it randomly generate a graphic version of an RNA molecule. The simplest known self-replicating molecule, and thus the simplest known thing that could possibly be affected by natural selection, is an RNA molecule with 102 nucleotides. There are four versions of the nucleotides (A, U, C, G), so there would be 4 to the power of 102 possible combinations, with only one that actually works. It's actually far more complex even than that, because it matters how the molecule folds and the nucleotides can connect with each other in various ways, but 4^102 is already a much larger number than 26^26, which is what this alphabet-based version is working on.

Thoughts?
Why is there only one combination that works? Why do you need 102 nucleotides?
 

Clete

Truth Smacker
Silver Subscriber
Why is there only one combination that works? Why do you need 102 nucleotides?
Because it is language-based and it takes what it takes to encode the needed information.

It isn't impossible that some amount of truncation could occur without affecting function, but it isn't likely, because the molecule is already so small and because the nucleotide sequence isn't the only factor. Truncating it would affect not just the sequence but also the way the molecule folds, which directly impacts its function, including whether it can function at all. The folding, as well as other structures in the molecule, is not trivial to the molecule's function and its ability to replicate itself, so the process of creating one by random chance is actually far more remote than the mere 1 in 4^102 that my proposed program would simulate.

The program running with the alphabet already proves the point, because the alphabet is vastly easier than anything having to do with an RNA molecule. It proves that it's impossible for even a 26-letter alphabet to self-assemble, never mind a protein molecule that has far more parts, has to fold in just such a way, and performs incredibly complex functions, not the least of which is to reproduce itself.
 

Avajs

Active member
Because it is language-based and it takes what it takes to encode the needed information.

It isn't impossible that some amount of truncation could occur without affecting function, but it isn't likely, because the molecule is already so small and because the nucleotide sequence isn't the only factor. Truncating it would affect not just the sequence but also the way the molecule folds, which directly impacts its function, including whether it can function at all. The folding, as well as other structures in the molecule, is not trivial to the molecule's function and its ability to replicate itself, so the process of creating one by random chance is actually far more remote than the mere 1 in 4^102 that my proposed program would simulate.

The program running with the alphabet already proves the point, because the alphabet is vastly easier than anything having to do with an RNA molecule. It proves that it's impossible for even a 26-letter alphabet to self-assemble, never mind a protein molecule that has far more parts, has to fold in just such a way, and performs incredibly complex functions, not the least of which is to reproduce itself.
What information are you encoding? I don't see why you need 102 nucleotides.
You are using RNA to make protein, right? So, ignoring any start/stop messages, 102 nucleotides makes a protein with 34 amino acids. Why that limit? I suspect most proteins are much larger, and yes, a different amino acid in the chain may change the structure, but I don't know that every change is fatal to function. Are you confusing the replication of RNA with proteins? Proteins do not reproduce themselves, right? They are built in the cell by mRNA running out to a ribosome that then constructs the protein, am I correct?
 

Clete

Truth Smacker
Silver Subscriber
What information are you encoding? I don't see why you need 102 nucleotides.
I am not encoding anything, and I'm not the one who came up with 102 nucleotides.

The fact is that the simplest known self-replicating molecule happens to be an RNA molecule that has 102 nucleotides. The information encoded in it has to do with both its function and its self-replication process.

You are using RNA to make protein, right?
Not necessarily. This particular RNA molecule is just the simplest known self-replicating molecule. Proteins are vastly more complex.

So, ignoring any start/stop messages, 102 nucleotides makes a protein with 34 amino acids. Why that limit?
Well, you don't get to ignore start/stop messages, for one thing, but even if you could, the point is that biology is a language-based system. The base pairs in DNA, the nucleotides in RNA, and even the way the molecules fold and various other aspects of their morphology have meaning that is translated into functionalities of various kinds.

I suspect most proteins are much larger, and yes, a different amino acid in the chain may change the structure, but I don't know that every change is fatal to function.
Not every change, but nearly every change. Even the changes that aren't fatal are pretty much never helpful. Very nearly all mutations create malfunctions, not enhancements. And only the beneficial mutations could possibly be "selected" for by natural selection, and even that isn't guaranteed.

Are you confusing the replication of RNA with proteins?
No.

Proteins do not reproduce themselves, right? They are built in the cell by mRNA running out to a ribosome that then constructs the protein, am I correct?
Yes, technically correct, but sort of beside the point of this particular discussion. The production of proteins requires other proteins, which themselves must be replicated: a process that is almost indescribably complex while also being irreducibly complex, meaning that you cannot make a protein at all without using other proteins to do it.

That being said, the point here is only indirectly related to any specific biological process. The point here is about abiogenesis, or how biology got started. The generation of something as simple as a twenty-six-letter alphabet is childishly simple by comparison to even the simplest of self-replicating molecules, never mind anything resembling real biological life such as a single-celled organism, and even that alphabet would take vastly longer than the age of the universe to ever happen.

In fact, at 750,000 attempts per second (which of course is ridiculously faster than a blob of nucleotides or amino acids could ever shuffle around and recombine themselves), it would take roughly 19 trillion times LONGER than the entire age of the universe to randomly generate the correct 26-letter sequence of the English alphabet.
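For anyone who wants to check that figure, the arithmetic is straightforward (the 13.8-billion-year age of the universe is the assumption here):

Code:
SECONDS_PER_YEAR = 365 * 24 * 3600
UNIVERSE_AGE_YEARS = 13.8e9  # assumed age of the universe

expected_attempts = 26 ** 26           # expected tries at 1-in-26^26 odds
seconds = expected_attempts / 750_000  # at 750,000 attempts per second
years = seconds / SECONDS_PER_YEAR
print(f"{years:.2e} years = {years / UNIVERSE_AGE_YEARS:,.0f} times the age of the universe")
# -> about 2.6e23 years, roughly 19 trillion universe-ages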
 

Clete

Truth Smacker
Silver Subscriber
RNA does not need 26 letters.
TRUE! I love it! It's just one sentence, but what makes your post so awesome is that it is an actual argument that is directly responsive to the point being discussed! It is just pathetically ridiculous how rare that is to see from anyone who doesn't already agree with most everything I say. Keep it up! (That isn't snark or sarcasm! Seriously! I'll take a relevant, single-sentence argument over 500-word displays of stupidity every day of the week and twice on Sundays!)

The point you bring up is exactly why I was considering doing a different program. (I've pretty much decided that there's not much point in doing so, because the current version makes the point quite well. If I do it at all, it'll be just to find out whether I can pull off bringing the program into existence.)

Your point doesn't help the evolution side of things, though, because what this super-simple version of RNA lacks in "letters", it more than makes up for in digits.

Evolve 2025 attempts to create, by random chance, the correct 26-letter English alphabet. There are two ways of doing this. I could have set it up such that once a letter has been used it cannot be repeated (i.e., simply shuffling the letters as one would a deck of cards). If I had done that, the odds of getting the correct sequence would be 1 chance in 26! - that's not an exclamation point; that's 26 factorial, which is a truly huge number. Big enough, in fact, that a random sequence generator would take longer than the age of the universe to generate the correct order.

I, however, didn't do it that way. I did it such that any position in the sequence can be any one of the 26 letters of the alphabet. That makes the odds much, much lower that the correct sequence would ever be produced at random. It is 1 chance in 26^26 - that's one chance in 26 to the 26th power - which is a mind-bendingly humongous number, far, far bigger than 26 factorial. 15.3 billion times larger than 26!, in fact.

Why would I want to make it so much harder? Well, because even the harder way doesn't hold a candle to the difficulty nature would have in creating even the simplest form of a self-replicating molecule. This particular RNA molecule has 102 nucleotides in total but, as you say, there aren't very many varieties of nucleotide. In fact, in this particular RNA molecule there are exactly four. That might seem to make it an easier hill to climb but, like I said, what it lacks in "letters" it makes up for in digits. If you ignored all the other complicating factors (which are many) and all you went for was the proper sequence of 102 digits drawn from four distinct "letters", the odds of getting the correct sequence would be 1 chance in 4^102, which is just a crazy huge number. 4^102 is 4.18×10^24 (4.18 septillion) times larger than 26^26.
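Both of those ratios are easy to verify with Python's arbitrary-precision integers:

Code:
from math import factorial

shuffle_odds = factorial(26)  # letters used once each: 1 in 26!
repeat_odds = 26 ** 26        # any letter in any position: 1 in 26^26
rna_odds = 4 ** 102           # 102 nucleotides, 4 kinds each: 1 in 4^102

print(f"26^26 / 26!   = {repeat_odds / shuffle_odds:.3e}")  # ~1.53e10 (15.3 billion)
print(f"4^102 / 26^26 = {rna_odds / repeat_odds:.3e}")      # ~4.18e24 (4.18 septillion)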

In short, like I said before, getting the English alphabet by pure random chance is total child's play compared to getting even sort of close to what it would take to produce the simplest of self-replicating molecules. And just completely forget about proteins, never mind whole working machines made of various proteins, all of which are both constructed and replicated by other proteins, which were themselves replicated in a similar way according to specific instructions encoded in a chemical language with a four-letter alphabet (DNA). It's just wild complexity upon wild complexity. It would literally be easier - far, far easier - to shake a box of watch parts around and have a perfect Patek Philippe fall into place.
 