How to improve the performance of computer systems?

I wanted to know how to improve the performance of computer systems in general. >Solution : There are a number of ways to improve the performance of computer systems: Upgrade the hardware. This is probably the most obvious solution, but it is also the most expensive. Use faster and more efficient software. This can be… Read More How to improve the performance of computer systems?
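The truncated answer lists hardware and software upgrades; one step it implies but does not show is measuring before optimizing. Below is a minimal sketch, not from the original answer, that compares a slow and a fast implementation of the same task with Python's standard timeit module; both functions are hypothetical stand-ins.

import timeit

# Hypothetical stand-ins: a pure-Python loop vs. the C-implemented built-in.
def slow_total(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

def fast_total(numbers):
    return sum(numbers)

data = list(range(100_000))

# timeit repeats each call many times, which smooths out single-run noise.
print("loop :", timeit.timeit(lambda: slow_total(data), number=100))
print("sum():", timeit.timeit(lambda: fast_total(data), number=100))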

Should I use the in operator or property accessors to check if a key exists in an object?

What would be the pros/cons of using: if (‘key’ in obj) vs if (obj[‘key’]) Is one faster than another? >Solution : Consider the following: const myObj = { hello: undefined, }; console.log(myObj.hello); console.log(‘hello’ in myObj); The key "hello" is defined in myObj but its value is undefined. If you truly only need to know if… Read More Should I use the in operator or property accessors to check if a key exists in an object?
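The question is JavaScript-specific, but the pitfall the answer points at (a key that exists with a falsy value) is language-agnostic. For consistency with the other examples on this page, here is a minimal Python sketch of the same distinction; the dict and key are illustrative only.

d = {"hello": None}

# Membership test: asks whether the key exists, ignoring its value.
print("hello" in d)          # True

# Truthiness test: conflates a missing key with a falsy value.
print(bool(d.get("hello")))  # False, even though the key is present

# Note: d["missing"] would raise KeyError in Python rather than
# returning undefined as in JavaScript.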

Extract path name based on a string

Below is my worked example: from itertools import zip_longest test2 = ['register/adam/users_photo3.jpg', 'register/adam/users_photo4.jpg', 'register/justin/users_photo1.jpg', 'register/justin/users_photo2.jpg', 'register/adam/users_photo3.jpg', 'register/adam/users_photo4.jpg', 'register/justin/users_photo1.jpg', 'register/justin/users_photo2.jpg', 'register/steve/users_photo1.jpg', 'register/steve/users_photo2.jpg', 'register/justin/users_photo1.jpg', 'register/justin/users_photo2.jpg', 'register/steve/users_photo1.jpg', 'register/steve/users_photo2.jpg', 'register/justin/users_photo1.jpg', 'register/justin/users_photo2.jpg'] test = ["justin","adam"] filter_list = [] for p,q in zip_longest(test,list_of_files): for r in list_of_files: if str(p) in r: filter_list.append(r) testmain=[p for p,r in zip_longest(test2,filter_list) if str(r) not in… Read More Extract path name based on a string
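The excerpt cuts off before the final filtering step, so the intended output is partly an assumption; reading the code, the goal seems to be splitting the paths by whether the user-name segment appears in test. A minimal sketch of that interpretation, using pathlib to compare the segment exactly rather than by substring (so "justin" cannot match a hypothetical "justine"):

from pathlib import PurePosixPath

test2 = [
    "register/adam/users_photo3.jpg",
    "register/justin/users_photo1.jpg",
    "register/steve/users_photo2.jpg",
]
test = ["justin", "adam"]

wanted = set(test)

# parts[1] is the user-name segment of "register/<user>/<file>".
filter_list = [p for p in test2 if PurePosixPath(p).parts[1] in wanted]
remainder = [p for p in test2 if PurePosixPath(p).parts[1] not in wanted]

print(filter_list)  # adam's and justin's photos
print(remainder)    # everything else, e.g. steve's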

Pandas aggregate with a self-written function: optimisation issue

The following code does exactly what I need; however, it is very slow when dealing with a large amount of data (up to 100,000 rows). How can it be improved? df = pd.DataFrame({ "session":["s1","s1","s1","s1","s2","s2","s2"], "sub session":["a", "b", "d", "g", "f", "a", "x"], "time":["2022-01-04 10:00:00", "2022-01-04 10:01:00", "2022-01-04 10:10:00", "2022-01-04 10:12:00", "2022-01-04 15:25:00", "2022-01-04 15:30:00", "2022-01-04… Read More Pandas aggregate with a self-written function: optimisation issue
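The excerpt is truncated before the custom aggregation function appears, so its exact logic is unknown; as a general pattern, though, the usual fix for a slow groupby with a hand-written Python function is to replace it with pandas' built-in vectorized aggregations. A minimal sketch on the same frame (the last timestamp is made up, since the excerpt cuts off), computing a hypothetical per-session duration:

import pandas as pd

df = pd.DataFrame({
    "session": ["s1", "s1", "s1", "s1", "s2", "s2", "s2"],
    "sub session": ["a", "b", "d", "g", "f", "a", "x"],
    "time": ["2022-01-04 10:00:00", "2022-01-04 10:01:00",
             "2022-01-04 10:10:00", "2022-01-04 10:12:00",
             "2022-01-04 15:25:00", "2022-01-04 15:30:00",
             "2022-01-04 15:31:00"],  # last value is hypothetical
})

# Parse timestamps once, up front; re-parsing inside an apply() is slow.
df["time"] = pd.to_datetime(df["time"])

# Built-in min/max aggregations run in optimized code paths,
# unlike a row-by-row Python callable passed to .apply().
out = df.groupby("session")["time"].agg(start="min", end="max")
out["duration"] = out["end"] - out["start"]
print(out)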