#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Companion AI chatbot/moderation
# 2024 Copyright © Robert APM Darin
# All rights reserved unconditionally.
# This program uses both CamelCase (PascalCase) and snake_case. After 40+
# years of writing code in more languages than I remember, it really
# doesn't matter after a while. Even beginner programmers should get used
# to seeing multiple formats intermixed.
# Yes, I know Python is meant to be object oriented... When I started writing
# the program, I never really expected it to grow beyond a fascination, let
# alone to where it is now. I literally started with 2 functions (as seen in
# the Extra directory). As each idea came, problems requiring solutions
# started to take the shape of functions. The program just grew and grew. At
# some point, I'll rewrite this... but for now, it works well and is quite fast.
# ***** IMPORTANT:
# This program uses ADMINISTRATOR privileges.
# Forum and thread support is a strange approach. Threads are treated like
# channels, with a few extra bits. Webhooks, slowdown mode, edits, even the
# way messages are sent into the thread are affected. Activities have to be
# tested, and if it is a thread, most of the time you have to pull the
# parent channel. Webhooks and slowdown mode, in particular, do NOT have the
# separate functionality of a channel, even though Discord treats them like a channel.
### *** At some point, I'm going to really have to rewrite this. The forum/thread
### detection code is a stinking hot mess and scattered everywhere.
# Areas where a classifier can improve functionality (at a higher cost):
# Phone number detection would have a significantly lower level of false
# positives.
#
# Technical messages versus non-technical for varying the temperature of
# the conversation.
# Emotion scoring (classifier)
#
# Emotional scoring (major scale): 10 (love) to -10 (hatred) sets the
# overall tone of the AI towards the user. This can be translated to the
# scale of a warm and inviting conversation to a cold and distant one.
# The system role could easily be adapted to act on this information.
# (Minor scale) Assuming 0 is neutral and the starting point of all
# interactions, a minor scale could be used to evaluate each input from 1
# (warm) to -1 (cold) to give that input a score. The scale needs to
# adjust though to ensure the major scale doesn't climb too quickly in
# either direction. For example, if the emotional score (major scale) is
# 1, then the minor scale range needs to be 0.9 to -1.
# The minor scale is added to or subtracted from the major scale to get the
# final emotion score of the bot at the last interaction. This will carry
# between channels towards the same user. The bot will greet the same user
# across all channels with the same emotional resonance.
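# Worked example (illustrative, using the numbers above): with a major
# scale of 1, the minor scale range shrinks to 0.9 to -1. An input scored
# at 0.5 (fairly warm) would move the major scale to 1.5, while an input
# scored at -1 (cold) would bring it back to 0.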
# Listed in order of when the service was added.
# added openai
# added ollama
# added together.ai Free models available.
# added cohere Trial key 1000 free requests per month
# added huggingface free key is 1000 requests per day, fragmented responses,
# 1 response could equal 8 requests or more
# added anthropic token counting is a hot mess, but functional,
# no free credits, pay upfront
# added perplexity.ai No free credits, pay upfront, search oriented, not
# good for conversational, but searching is outstanding.
# added openrouter.ai Several free models limited to 200 requests per day,
# has just about every major provider, even some of the
# more difficult ones to build for. Paid models will have
# a premium for the normalization service. No direct model
# support like you would get directly from the model vendor
# (OpenAI, Anthropic, so on). Uses all OpenAI code. Token
# counting is a nightmare, but the basic len/4 works
# reasonably well. tiktoken() does NOT work.
# --> Not in a specific order, add support for the following engines (Maybe):
# DeepInfra https://deepinfra.com/pricing
# fireworks.ai
# Anyscale
# Replicate
# google.ai (Gemini) This is a hot mess to try to develop for. There
# are serious cascading issues from message
# format to usage consideration... The API is
# free, but the development process is hideous.
# Vertex AI Now GoogleAI
# AI21labs Seriously broken in multiple ways. API does not work.
# Disappointing because this really looked put
# together with as much expertise as OpenAI, even
# directly addressing token counting upfront.
# Special USER commands: (Completed)
# %http Read URLs, YouTube transcripts, and PDFs
# %yttags Get YouTube video tags, if there are any.
# %Forget Tell the AI to forget the conversation in the current channel
# %AnagramSolver Solve Anagrams
# Developer/Admin only: (Completed)
# %PurgeRequests Empty the server request queue
# %CheckBot Check if the AI is allowed in a channel
# Needed functionality
# Purge memories that are older than X days automatically - completed
# Imposter detection - completed
# Auto slow mode - completed
# Anti-raid - completed
# Find a way to identify personal information: email, phone numbers, SSN, EIN,
# Credit Card numbers - completed
# Refine with an AI classifier - completed
# URL check (abuseIPDB) verifications. - completed
# Content Identification/Moderation System (CIMS) - completed
# Build a classifier to handle the following: (Mark with reactions):
# "TOXICITY", "SEVERE_TOXICITY", "IDENTITY_ATTACK", "INSULT", "PROFANITY",
# "THREAT", "SEXUALLY_EXPLICIT", "FLIRTATION", "PERSONAL ATTACK",
# "INFLAMMATORY", "OBSCENE", "BULLYING"
# Anti nuke
# Ticket management
# Anti nudity verification for images.
# Disable 3rd party logging.
import warnings
warnings.filterwarnings('ignore')
import logging
logging.basicConfig(level=logging.CRITICAL)
logger=logging.getLogger('transformers')
logger.setLevel(logging.CRITICAL)
logger.handlers=[]
import sys
import os
import io
import copy
import itertools
import functools
import inspect
import traceback
from collections import deque
import datetime
import time
import random
import json
import string
import concurrent.futures
import threading
import urllib.request
from urllib.parse import urlparse
import requests
import socket
import re
import asyncio
import discord
from discord.ext import commands, tasks
import profanity_check as pc
import pdfplumber
import tiktoken
import openai
import ollama
import together
import cohere
from huggingface_hub import InferenceClient
import anthropic
import youtube_transcript_api
from transformers import AutoTokenizer
from googleapiclient.discovery import build
# Active version
Version="0.0.0.0.1200"
# The running name of the program. Must be global and NEVER changing.
RunningName=sys.argv[0]
# Persona base folder. This is where all personas are stored.
CompanionBase='/home/Companion'
CompanionStorage=f'{CompanionBase}/Personas'
MemoryStorage=f'{CompanionBase}/Servers/Memory'
LoggingStorage=f'{CompanionBase}/Servers/Logs'
ConfigStorage=f'{CompanionBase}/Servers/Config'
# For anagram solver
AnagramWordList=f'{CompanionBase}/AnagramSolver.txt'
# This is a list of domains that are scams, frauds, or malicious. If it
# exists, it is read and messages that have listed links are removed.
# Requires the persona text file as well for responses.
CompanionWhitelist=f'{CompanionBase}/Companion.whitelist'
CompanionScamURLS=f'{CompanionBase}/Companion.scam-urls'
CompanionAutoFilter=f'{CompanionBase}/Companion.autofilter'
# `GuildQueueLock`: Ensures safe modification of the guild queue structure
# during request management across different servers.
GuildQueue=deque()
GuildQueueLock=threading.Lock()
GuildQueueTimeout=60
# `ResponseLock`: Prevents simultaneous modifications when queuing
# responses for processing.
ResponseLock=threading.Lock()
ResponseTimeout=60
# `DeleteLock`: Manages safe scheduling of message deletions without
# conflicts.
DeleteLock=threading.Lock()
DeleteTimeout=60
# `DisectLock`: Ensures accurate logging by preventing overlapping writes
# during message dissection.
DisectLock=threading.Lock()
DisectTimeout=60
# `LoggingLock`: Serializes writes to the log files so entries from
# different threads don't overlap.
LoggingLock=threading.Lock()
LoggingTimeout=60
# Each server must have its own lock to ensure concurrency under heavy
# load. Requests per server are still sequential. This should prevent
# active servers from hogging resources from smaller or less active
# servers.
BabbleLock={}
BabbleTimeout=60
# Constants for auto slowmode. Needs to be dynamic in the future
SlowModeMultiplier=3 # Seconds for individual slow mode
SlowwModeCooldown=307 # 5 minutes 7 seconds cooldown for slow mode adjustments
# Dictionary to store the last slow mode change time for each channel
LastSlowmodeChange={}
# For counting active users per channel
ActiveUsers={}
# For active member joins. Anti-raid measures. Will be a multiplier for
# slowdown mode if above 1% of total users.
ActiveJoins={}
# This is required for the bot to work properly.
intents=discord.Intents.all()
intents.presences=True
intents.guilds=True
intents.messages=True
intents.message_content=True
intents.members=True
# Create a Discord client
client=discord.Client(intents=intents)
### Really need to move these to files on disk instead of global memory
### lists, for sharding/multi-process management.
# List to store timed messages for deletion
DeleteList=[]
###
### Special functions/Decorators
###
# The `function_timer` decorator measures and prints the execution time of
# both synchronous and asynchronous functions. For synchronous functions,
# it records the start and end times, calculates the elapsed duration, and
# outputs the time taken. For asynchronous functions, it uses the same
# approach but accommodates `await` for proper timing. This decorator helps
# in profiling and performance monitoring by providing precise timing
# information for each wrapped function.
def function_timer(func):
@functools.wraps(func)
def sync_timer(*args, **kwargs):
start_time=time.perf_counter()
result=func(*args, **kwargs)
end_time=time.perf_counter()
elapsed_time=end_time - start_time
print(f"{func.__name__}: {elapsed_time:.6f} seconds")
return result
@functools.wraps(func)
async def async_timer(*args, **kwargs):
start_time=time.perf_counter()
result=await func(*args, **kwargs)
end_time=time.perf_counter()
elapsed_time=end_time - start_time
print(f"{func.__name__}: {elapsed_time:.6f} seconds")
return result
if inspect.iscoroutinefunction(func):
return async_timer
return sync_timer
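# Example usage (illustrative; Busy is a hypothetical function):
#
#   @function_timer
#   def Busy():
#       time.sleep(0.25)
#
#   Busy()   # prints something like "Busy: 0.250123 seconds"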
# The `function_trapper` is a versatile decorator that wraps both
# synchronous and asynchronous functions to handle exceptions gracefully.
# When a wrapped function encounters an error, it logs the error details,
# including the function name and the line number where the error occurred,
# and then returns a predefined fallback result. This decorator ensures
# robust error handling while maintaining flexibility for both async and
# sync functions.
def function_trapper(failed_result=None):
def decorator(func):
if inspect.iscoroutinefunction(func): # Handle async functions
@functools.wraps(func)
async def async_wrapper(*args, **kwargs):
try:
return await func(*args, **kwargs)
except Exception as err:
tb=traceback.extract_tb(sys.exc_info()[-1])
errline=tb[-1].lineno
ErrorLog(f"{func.__name__}/{errline}: {err}")
return failed_result
return async_wrapper
else: # Handle sync functions
@functools.wraps(func)
def sync_wrapper(*args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as err:
tb=traceback.extract_tb(sys.exc_info()[-1])
errline=tb[-1].lineno
ErrorLog(f"{func.__name__}/{errline}: {err}")
return failed_result
return sync_wrapper
# Handle decorator usage with or without parentheses
if callable(failed_result): # Used without parentheses; fall back to None
func,failed_result=failed_result,None
return decorator(func)
return decorator
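# Example usage (illustrative; both functions are hypothetical). The
# decorator works with or without parentheses:
#
#   @function_trapper            # bare form, falls back to None
#   def Risky(): return 1/0
#
#   @function_trapper(0)         # parenthesized form with a fallback value
#   def RiskyToo(): return 1/0
#
#   Risky()      # logs the ZeroDivisionError, returns None
#   RiskyToo()   # logs the ZeroDivisionError, returns 0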
###
### General file tools
###
# Cheap mkdir
def mkdir(fn):
if not os.path.exists(fn):
os.makedirs(fn,exist_ok=True)
# Read file into buffer
@function_trapper(None)
def ReadFile(fn,binary=False):
if os.path.exists(fn):
if binary:
cf=open(fn,'rb')
buffer=cf.read()
cf.close()
else:
cf=open(fn,'r')
buffer=cf.read().strip()
cf.close()
else:
buffer=None
return buffer
# Append a single line to an existing file
def AppendFile(fname,text):
fh=open(fname,'a+')
fh.write(text)
fh.close()
# Write file to disk
def WriteFile(fn,data):
cf=open(fn,'w')
cf.write(data)
cf.close()
# The `ReadFile2List` function processes the content of a file by reading
# it into a list, splitting the text into individual lines. It removes any
# blank lines and optionally converts all text to lowercase if specified.
# This ensures a clean and usable list of responses or data items from the
# file.
@function_trapper(None)
def ReadFile2List(fname,ForceLower=False):
# Something broke. Keep the responses in character
responses=ReadFile(fname).strip().split('\n')
while '' in responses:
responses.remove('')
if ForceLower==True:
responses=[item.lower() for item in responses]
return responses
# The `PickRandomResponse` function selects and returns a random response
# from a given file. It first reads the file into a list of responses, then
# randomly picks one. If the selected response is a special placeholder
# (enclosed by `{[(*` and `*)]}`), the function treats it as a reference to
# another file, reads the content of that file, and returns it instead.
# Otherwise, it directly returns the selected response.
@function_trapper(None)
def PickRandomResponse(fname):
responses=ReadFile2List(fname)
selected_response=random.choice(responses)
if selected_response.startswith('{[(*') and selected_response.endswith('*)]}'):
buffer=ReadFile(selected_response[4:-4].strip()).strip()
return buffer
return selected_response
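# Example (illustrative; the file name is hypothetical): a response file
# line of
#   {[(*/home/Companion/LongAnswer.txt*)]}
# is treated as a redirect, and the contents of LongAnswer.txt are
# returned instead of the line itself.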
###
### Functions for AbuseIPDB
###
# This set of functions works together to identify potentially harmful or
# suspicious links in messages. It checks if a link is on a safe list or a
# known scam list. If the link isn’t on either list, it looks up its background
# using a service called AbuseIPDB, which evaluates whether the link’s
# associated IP address has been reported for malicious activity. If the link
# is flagged as dangerous, it’s marked as unsafe. This helps ensure that
# harmful links can be quickly identified and dealt with.
@function_trapper
def ExtractURLs(text):
url_pattern=re.compile(r"https?://[^\s]+")
return url_pattern.findall(text)
@function_trapper
def Domain2IP(domain):
try:
ip_address=socket.gethostbyname(domain)
return ip_address
except Exception as err:
pass
return None
@function_trapper
def ExtractDomains(url):
parsed_url=urlparse(url)
return parsed_url.netloc
@function_trapper
def CheckAbuseIPDB(domain,token):
ipa=Domain2IP(domain)
if ipa==None:
return None,0
url=f"https://api.abuseipdb.com/api/v2/check"
params={"ipAddress": ipa}
headers={"Key": token, "Accept": "application/json"}
try:
response=requests.get(url, headers=headers, params=params)
response.raise_for_status()
data=response.json()
if data.get("data", {}).get("abuseConfidenceScore", 0)>0:
return True, data["data"]["abuseConfidenceScore"]
else:
return False, 0
except requests.exceptions.RequestException as e:
ErrorLog(f"Error checking AbuseIPDB: {e}")
return None, 0
@function_trapper
def CheckMessageURLs(gid,text):
# Check whitelist
whiteurls=ReadFile2List(CompanionWhitelist)
# Known scam/fraud list
scamurls=ReadFile2List(CompanionScamURLS)
# Extract URLs from the message
Tokens=ReadTokens(gid)
urls=ExtractURLs(text)
if urls:
for url in urls:
domain=ExtractDomains(url)
if domain in whiteurls:
return False
elif domain in scamurls:
return True
elif 'AbuseIPDB' in Tokens:
is_abusive, score=CheckAbuseIPDB(domain,Tokens['AbuseIPDB'])
if is_abusive:
return True
return False
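# Example flow (illustrative; the URL is hypothetical): for a message
# containing https://example.com/offer, the domain example.com is checked
# against the whitelist first (safe, returns False), then the scam list
# (flagged, returns True), and finally AbuseIPDB if a token is configured.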
###
### Emotional Score functions
###
# The `CalculateMinorScale` function translates an emotional score,
# referred to as the "MajorScale," into a refined range of minor emotional
# variations, represented by `MinorMax` and `MinorMin`. The MajorScale,
# constrained between -10 and 10, determines the sensitivity of these
# minor emotional shifts. Positive scores reduce the upper range of
# variability, while negative scores lessen the lower range, and a neutral
# score provides a balanced range. This ensures that emotional intensity
# is scaled proportionally, with the output reflecting subtle fluctuations
# formatted to two decimal places for precision.
@function_trapper(0)
def CalculateMinorScale(MajorScale):
# Ensure the MajorScale is within the allowed range
MajorScale=int(MajorScale)
if MajorScale>10:
MajorScale=10
elif MajorScale<-10:
MajorScale=-10
# Determine the range of the MinorScale based on the MajorScale
if MajorScale==0:
MinorMax=0.1
MinorMin=-0.11
elif MajorScale>0:
MinorMax=0.1 - 0.01 * MajorScale
MinorMin=-0.1
else: # MajorScale<0
MinorMax=0.1
MinorMin=-0.1 + 0.01 * abs(MajorScale)
# Format MinorMax and MinorMin to two decimal places
MinorMax=float(f"{MinorMax:.2f}")
MinorMin=float(f"{MinorMin:.2f}")
return MinorMax, MinorMin
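# Examples of the values produced above:
#   CalculateMinorScale(0)   -> (0.1, -0.11)
#   CalculateMinorScale(5)   -> (0.05, -0.1)
#   CalculateMinorScale(-5)  -> (0.1, -0.05)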
# The `CalculateEmotionalScore` function evaluates and updates the
# emotional score (`escore`) of a conversation, using an external
# classifier to assess the sentiment of the provided text. Starting with a
# baseline `escore` (read from a file if available), it adjusts the
# scoring range based on the `CalculateMinorScale` function, which
# determines acceptable emotional variability. If the bot includes an
# emotional classifier, the function calculates an additional sentiment
# score (`mscore`) using the AI classifier and adjusts the `escore`
# accordingly. The updated score is written back to the file, and any
# placeholders in the response buffer (`buff`) are replaced with the new
# score, ensuring dynamic emotional feedback.
@function_trapper(None)
def CalculateEmotionalScore(fn,gid,bot,buff,text):
# Don't waste cycles if there's no defined classifier
if 'EmotionClassifier' not in bot:
return buff
# Figure out Emotional Score
escore=0
if os.path.exists(fn):
try:
escore=float(ReadFile(fn).strip())
except:
pass
lval,rval=CalculateMinorScale(escore)
mscore=0
if 'EmotionClassifier' in bot:
try:
mscore=float(asyncio.run(AIClassifier(gid,bot['EmotionClassifier'],text,FailResp=0,lval=lval,rval=rval)))
except:
pass
escore+=mscore
WriteFile(fn,f"{escore:.2f}\n")
if '{ESNEUTRAL}' in buff:
buff=buff.replace('{ESNEUTRAL}',f'{escore:.2f}')
return buff
###
### Random support functions
###
# The `NumberOnly` function checks if a given string can be considered a
# valid representation of a number, accounting for numeric characters,
# commas, periods, and various look-alike characters. It replaces common
# look-alike letters (like 'O' for '0' and 'I' for '1') with their numeric
# equivalents, trims whitespace, and validates the presence of digits while
# filtering out invalid characters. The function returns `True` if the
# string is a valid number, and `False` otherwise.
@function_trapper(None)
def NumberOnly(s):
# Replace common look-alikes with their numeric equivalents
look_alike_replacements={
'I': '1', # Uppercase 'i' as '1'
'l': '1', # Lowercase 'L' as '1'
'O': '0', # Uppercase 'O' as '0'
}
# Replace look-alikes in the input string
for look_alike, digit in look_alike_replacements.items():
s=s.replace(look_alike, digit)
s=s.strip().replace(' ','')
# Define valid characters, including look-alikes and numeric equivalents
valid_chars=set("0123456789.,") # Regular digits, comma, and period
look_alike_chars=set("٠١٢٣٤٥٦٧٨٩") # Arabic-Indic digits
full_width_digits=set("０１２３４５６７８９") # Full-width digits
# Combine all valid characters into one set
valid_chars.update(look_alike_chars)
valid_chars.update(full_width_digits)
# Check each character in the string
for char in s:
if char not in valid_chars:
return False
# Basic number validation: ensure the string isn't just commas or periods
if s.replace(",", "").replace(".", "").isdigit():
return True
return False
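# Examples of the values produced above:
#   NumberOnly("1,000")  -> True
#   NumberOnly("l,OOO")  -> True   ('l' and 'O' map to '1' and '0')
#   NumberOnly("12a4")   -> False  ('a' is not a valid character)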
# Building the leet derivatives. This was a royal pain in the ass, but
# by doing so, any user trying to bypass the edit detection can be
# presumed to have malicious intent.
# The BuildDerivitives function generates variations of a given word
# using common leetspeak substitutions. This is designed to catch
# altered forms of keywords that could be used to evade detection,
# ensuring that questions about sensitive topics (like age) are
# identified, even if disguised by replacing characters with
# similar-looking ones (e.g., "age" -> "4g3").
@function_trapper(None)
def BuildDerivitives(word):
substitutions={
'a': ['@','4'],
'e': ['3'],
'o': ['0'],
'l': ['1', '|', '!', 'i'],
'i': ['1', 'l', '|', '!'],
's': ['z', '$', '5'],
't': ['7', '+']
}
# Start the list
dList=[ word ]
# Forward in loop
for x in range(len(word)):
xword=list(word)
if xword[x] in substitutions.keys():
cList=substitutions[xword[x]]
for y in range(len(cList)):
xword[x]=cList[y]
nword=''.join(xword)
if nword not in dList:
dList.append(nword)
# Forward reset at beginning
xword=list(word)
for x in range(len(word)):
if xword[x] in substitutions.keys():
cList=substitutions[xword[x]]
for y in range(len(cList)):
xword[x]=cList[y]
nword=''.join(xword)
if nword not in dList:
dList.append(nword)
# Backwards
xword=list(word)
for x in range(len(word)-1,-1,-1):
if xword[x] in substitutions.keys():
cList=substitutions[xword[x]]
for y in range(len(cList)):
xword[x]=cList[y]
nword=''.join(xword)
if nword not in dList:
dList.append(nword)
# return the list of words back to user
return dList
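# Example of the values produced above:
#   BuildDerivitives('age') -> ['age', '@ge', '4ge', 'ag3', '4g3', '@g3']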
# The BuildLeetList function creates a comprehensive list of leetspeak
# variations for a set of keywords. By applying the BuildDerivitives
# function to each word in the provided list, it generates leet-based
# variations that help detect keywords even when altered by character
# substitutions. This enables more reliable identification of disguised
# or intentionally modified words related to sensitive topics,
# supporting moderation efforts.
@function_trapper(None)
def BuildLeetList(side):
leetlist=[]
for i in range(len(side)):
leet=BuildDerivitives(side[i])
for j in range(len(leet)):
if leet[j] not in leetlist:
leetlist.append(leet[j])
return leetlist
# The `StripPunctuation` function removes all punctuation and high ASCII
# characters from a given text by replacing them with spaces. It achieves
# this by using a translation table that maps these characters to spaces,
# ensuring the returned text is cleaned and ready for further processing,
# free from unwanted symbols.
@function_trapper(None)
def StripPunctuation(text):
# Define punctuation and high ASCII characters
punctuation=string.punctuation
high_ascii_chars=''.join(chr(i) for i in range(128, 256))
# Create a translation table to map all punctuation and high ASCII characters to spaces
translation_table=str.maketrans({**dict.fromkeys(punctuation, ' '), **dict.fromkeys(high_ascii_chars, ' ')})
# Replace punctuation and high ASCII characters with spaces in the text
cleaned_text=text.translate(translation_table)
return cleaned_text
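# Example (illustrative):
#   StripPunctuation("Hi, there!") -> "Hi  there "
# The comma and exclamation mark are each replaced with a space.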
# The `jsonFilter` function cleans up a string by removing unwanted
# characters and formatting artifacts. It eliminates specific characters
# like newlines, tabs, and carriage returns, and optionally removes spaces
# based on a provided flag. This ensures the resulting string is stripped
# down and ready for reliable use in JSON processing or other structured
# tasks.
@function_trapper(None)
def jsonFilter(s,FilterSpace=True):
d=s.replace("\\n","").replace("\\t","").replace("\\r","")
if FilterSpace==True:
filterText='\t\r\n \u00A0'
else:
filterText='\t\r\n\u00A0'
for c in filterText:
d=d.replace(c,'')
return(d)
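# Example (illustrative):
#   jsonFilter('{"key":\n "value"}') -> '{"key":"value"}'
# With FilterSpace=False, spaces are preserved and only the other
# whitespace characters are removed.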
# The `GetWordList` function takes a block of text and processes it into a
# list of individual words in lowercase. It splits the text using spaces,
# ensuring any unnecessary empty entries, such as extra spaces, are
# removed. This streamlined approach prepares the words for consistent and
# clean usage in further operations.
@function_trapper(None)
def GetWordList(text):
words=text.lower().split()
return [word for word in words if word.strip()]
###
### Direct Companion functions
###
# The `ReadTokens` function ensures the bot has the necessary credentials
# to operate by locating and reading a specific file that stores API
# tokens. It carefully checks if the file exists, verifies its format, and
# ensures essential keys like the Discord token are present. If anything is
# missing or incorrect, the function logs the issue and stops the program
# to prevent errors during operation. By validating this information
# upfront, it ensures the bot can run securely and without interruptions.
@function_trapper({})
def ReadTokens(gid=None):
tokens={}
if gid==None:
tfile=RunningName+'.tokens'
else:
tfile=f"{ConfigStorage}/{gid}/{gid}.tokens"
if os.path.exists(tfile):
try:
tokens=json.loads(jsonFilter(ReadFile(tfile)))
except Exception as err:
ErrorLog("Error token file is not in JSON format. Please see README.md for new layout")
sys.exit(1)
else:
ErrorLog(f"Missing token file: {tfile}")
sys.exit(1)
if gid==None and 'Discord' not in tokens:
ErrorLog("The MUST be a Discord API reference in the tokens file")
sys.exit(1)
return tokens
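# Illustrative token file layout (the values are placeholders; only
# 'Discord' is mandatory in the global token file, and the other keys
# shown are the ones referenced elsewhere in this program):
#
#   { "Discord":   "<bot token>",
#     "OpenAI":    "<api key>",
#     "Anthropic": "<api key>",
#     "AbuseIPDB": "<api key>" }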
# Raw dump. For diagnostic purposes: to see the actual return response from the AI model.
def RawLog(text):
if LoggingLock.acquire(timeout=LoggingTimeout):
try:
mkdir(LoggingStorage)
fn=f'{LoggingStorage}/RAWDUMP.log'
fh=open(fn,'w')
fh.write(text)
fh.close()
except:
pass
LoggingLock.release()
# Logging
def WriteLog(gid,uid,channel,text):
if LoggingLock.acquire(timeout=LoggingTimeout):
try:
txt=text.replace('\n','\\n').replace('\r','\\r')
time=(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f'))
s=f'{time} {uid} {channel} {txt}\n'
dn=f'{LoggingStorage}/{gid}'
mkdir(dn)
fn=f'{dn}/{channel}.log'
AppendFile(fn,s)
except Exception as err:
print(f'LOG Broke: {err}')
pass
LoggingLock.release()
# Log errors
def ErrorLog(text):
if LoggingLock.acquire(timeout=LoggingTimeout):
try:
txt=text.replace('\n','\\n').replace('\r','\\r')
time=(datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S.%f'))
s=f'{time} {txt}\n'
mkdir(LoggingStorage)
fn=LoggingStorage+'/Errors.log'
AppendFile(fn,s)
# print to console
print(txt)
except:
pass
LoggingLock.release()
# This function is designed to detect harmful or inappropriate content in a
# message using an AI classifier. It identifies categories like toxicity,
# insults, threats, or sexually explicit language. If the AI detects an issue,
# the bot reacts to the message with a corresponding emoji to highlight the
# type of violation. This quick response helps flag problematic content for
# review or moderation.
# Add ability to check each setting for deletion and respond with
# "inappropriate content for this server" (badcontent) message.
@function_trapper("NONE")
async def CheckCIMS(gid,bot,message,nsfw=False):
cimsCategories={
"TOXICITY": "\u2623",
"SEVERE_TOXICITY": "\U0001F525",
"IDENTITY_ATTACK": "\U0001F534",
"INSULT": "\U0001F92C",
"PROFANITY": "\U0001F4A2",
"THREAT": "\u26A0",
"SEXUALLY_EXPLICIT": "\U0001F51E",
"FLIRTATION": "\U0001F48B",
"PERSONAL_ATTACK": "\U0001F44A",
"INFLAMMATORY": "\u26A1",
"OBSCENE": "\U0001F621",
"BIGOTRY": "\U0001F6AB",
"BULLYING": "\U0001F6D1" }
classifier=bot['CIMSClassifier']
text=f"Classify this: '{message.content}'"
AImatch=await AIClassifier(gid,classifier,text)
AImatch=AImatch.upper().replace('\n',' ').replace('.','').replace('*','').strip()
if AImatch!="NONE" and AImatch!='NO':
await ModeratorNotify(bot,message.guild,f"{message.author.name}/{message.author.id} flagged for {AImatch.lower()} in {message.channel.name}")
# Iterate through the toxic categories
# if category is in the classifier, delete the message
for category, emoji in cimsCategories.items():
if category in AImatch:
try:
if category in classifier:
await ModeratorNotify(bot,message.guild,f"Deleted message from {message.author.name}/{message.author.id} for {category.lower()} in {message.channel.name}")
if not message.author.bot:
await send_response(bot,message,PickRandomResponse(bot['CIMS']),delete=57)
await message.delete()
# Once we hit a delete wall, the message is done and
# gone.
return True
else:
await message.add_reaction(emoji)
except Exception as err:
ErrorLog(f"CIMS Failed to add reaction/delete message: {err}")
# Each reaction is unique; move to the next category after reacting
return False
# This function, `AIClassifier`, serves as a versatile interface to
# interact with different AI models from multiple providers. It accepts a
# `gid` (guild ID), a `classifier` configuration (which includes the AI
# engine type and model details), and a `text` input to be processed by
# the AI. The function first prepares the system and user messages by
# reading and formatting the provided instructions and text. It retrieves
# API tokens and settings from the `gid` and sets the relevant parameters,
# including a timeout and token limit. Based on the selected AI engine
# (e.g., OpenAI, Anthropic, Cohere, etc.), it sends the formatted request
# to the respective service and waits for a response. If an unrecognized
# engine is provided, it logs an error and returns a fallback response.
# The response from the AI is returned, completing the function's task.
@function_trapper('No')
async def AIClassifier(gid,classifier,text,FailResp="No",lval=None,rval=None,nsfw=False):
# NSFW channels can have a different classifier that may be less aggressive
clinstr=classifier['Instructions']
if nsfw:
if os.path.exists(clinstr+'.nsfw'):
clinstr+='.nsfw'
msg=[]
instr=ReadFile(clinstr).replace('\n',' ').strip()
if lval:
instr=instr.replace('{lval}',f'{lval}')
if rval:
instr=instr.replace('{rval}',f'{rval}')
input=text.replace('\n',' ').replace("'","\'").replace('"',"'").strip()
msg.append({ "role":"system", "content":instr })
msg.append({ "role":"user", "content":input })
Tokens=ReadTokens(gid)
tout=classifier.get('Timeout',60)
# Split multiple entries
el=list(classifier['Engine'].split(',')) # Engine list
ec=len(el) # Engine list length
tl=list(classifier['MaxTokens'].split(',')) # Max Tokens list
mt=len(tl) # Max Tokens list length
ml=list(classifier['Model'].split(',')) # Model list
mc=len(ml) # Model list length
# The number of models and MaxTokens entries MUST equal the number of
# engines: 1 model and 1 token limit per engine.
if not (ec==mc==mt):
ErrorLog(f"Broke AIClassifier: engines/models/MaxTokens counts must match ({ec}/{mc}/{mt})")
return None
# Run through the engines/models until we have a response.
response=None
ecounter=0
model=None
while response==None and ecounter<ec:
provider=el[ecounter].lower()
model=ml[ecounter]
mts=int(tl[ecounter])
try:
if provider=='openai':
response=GetOpenAI(Tokens['OpenAI'],msg,model,0,0,tout)
elif provider=='openrouter':
response=GetOpenRouter(Tokens['OpenRouter'],msg,model,0,0,tout)
elif provider=='anthropic':
response=GetAnthropic(Tokens['Anthropic'],msg,model,0,0,tout)
elif provider=='togetherai':
response=GetTogetherAI(Tokens['TogetherAI'],msg,model,0,0,tout)
elif provider=='cohere':
response=GetCohere(Tokens['Cohere'],msg,model,0,0,tout)
elif provider=='ollama':
response=GetOllama(None,msg,model,0,0,tout,seed=0,mt=mts)
elif provider=='perplexity':
response=GetPerplexity(Tokens['Perplexity'],msg,model,0,0,tout)
elif provider=='huggingface':
response,stop=GetHuggingFace(Tokens['HuggingFace'],msg,model,0,0,tout)
else:
ErrorLog(f"Invalid classifier engine: {provider}")
return FailResp
if response==None:
ecounter+=1
else:
break
except Exception as err:
response=None
print(provider,model,response)
if response==None:
return FailResp
return response
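# Illustrative classifier layout (model names are placeholders; the keys
# are the ones read above). Comma-separated entries are tried in order
# until one engine returns a response:
#
#   { "Engine":       "openai,ollama",
#     "Model":        "<openai model>,<ollama model>",
#     "MaxTokens":    "256,256",
#     "Timeout":      60,
#     "Instructions": "/path/to/classifier.instructions" }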
# This function scans a piece of text to identify sensitive personal information, such
# as Social Security numbers, phone numbers, email addresses, credit card numbers, IP
# addresses, and Employer Identification Numbers (EINs). By checking for specific
# patterns associated with each type of information, it aims to detect and label any
# sensitive data it finds, like an email or a phone number, within the text. If it