How to integrate a User Defined Function with an Excel formula

The following User Defined Function references the previous Excel sheet:

Function PrevSheet()
    Application.Volatile
    PrevSheet = Worksheets(ActiveSheet.Index - 1).Name
End Function

The following formula works:

=Sheet1!A1

The following formula does NOT work and needs to be repaired:

=PrevSheet()!A1

>Solution : Expansion on my comment to do this all in a UDF: Function PrevSheetVal(PrevSheetRow… Read More How to integrate a User Defined Function with an Excel formula
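The truncated solution builds the previous-sheet lookup into the UDF itself. A minimal sketch of that idea, with an assumed signature taking the range to read (the original parameter list is cut off):

Function PrevSheetVal(rng As Range) As Variant
    Application.Volatile
    Dim callerSheet As Worksheet
    ' The sheet that contains the formula, not whichever sheet is active
    Set callerSheet = Application.Caller.Worksheet
    ' Read the same cell address from the sheet one position to the left
    PrevSheetVal = callerSheet.Parent.Worksheets(callerSheet.Index - 1).Range(rng.Address).Value
End Function

Called as =PrevSheetVal(A1), this avoids the =PrevSheet()!A1 construction entirely; a worksheet formula cannot splice a UDF's return value into a sheet reference without going through INDIRECT.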

What's with these return values in User Defined Functions in C?

A program to print the reverse of a user-input integer in C:

#include <stdio.h>

int reverse(int x)
{
    int lastdigit;
    while (x > 0)
    {
        lastdigit = x % 10;
        printf("%d", lastdigit);
        x = x / 10;
    }
    return lastdigit;
}

int main()
{
    int num, fn;
    printf("Enter any Number: ");
    scanf("%d", &num);
    fn = reverse(num);
    return 0;
}

Here in this code, at the function definition, when I run with… Read More What's with these return values in User Defined Functions in C?
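For context: reverse() as written prints the digits but returns only the last remainder it computed (the leading digit of the original number), and lastdigit is uninitialized if x <= 0. A minimal sketch of the usual fix, assuming the intent is to return the reversed number itself:

#include <stdio.h>

/* Sketch: accumulate the reversed value instead of printing digit by digit,
   so the return value is actually the reversed number. */
int reverse(int x)
{
    int result = 0;
    while (x > 0)
    {
        result = result * 10 + x % 10;  /* append the last digit of x */
        x /= 10;
    }
    return result;
}

int main(void)
{
    int num;
    printf("Enter any Number: ");
    if (scanf("%d", &num) == 1)
        printf("%d\n", reverse(num));
    return 0;
}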

PySpark UDF storing incorrect data despite the function producing the correct result

So I have this weird issue. I'm working with a huge dataset whose dates and times are represented by a single string. The data can easily be converted using datetime.strptime(), but the dataset is so large that I need PySpark to do the conversion. No problem, I thought, I scoured… Read More PySpark UDF storing incorrect data despite the function producing the correct result
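A minimal sketch of the kind of conversion described, assuming a column named ts_string and the format '%Y-%m-%d %H:%M:%S' (both are assumptions, since the post is cut off). Declaring the UDF's return type matters: without it Spark defaults to StringType, one common way a correct Python result ends up stored incorrectly:

from datetime import datetime

from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import TimestampType

spark = SparkSession.builder.getOrCreate()

# Hypothetical data; the real dataset, column name, and format are not shown.
df = spark.createDataFrame([("2023-01-15 08:30:00",)], ["ts_string"])

@udf(returnType=TimestampType())  # declare the type so Spark stores a timestamp, not a string
def parse_ts(s):
    return datetime.strptime(s, "%Y-%m-%d %H:%M:%S") if s else None

df = df.withColumn("ts", parse_ts(df.ts_string))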

Do User-Defined Scalar Valued Functions still prevent parallelism?

I'm currently reading a book about SQL Server 2014. It claims that User-Defined Scalar Valued Functions prevent parallelism for the entire plan they appear in. Is this still true in later versions? >Solution : If the function is not inlined, it still prevents parallelism. TSQLUserDefinedFunctionsNotParallelizable still exists as a NonParallelPlanReason in the execution… Read More Do User-Defined Scalar Valued Functions still prevent parallelism?
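Since SQL Server 2019, scalar UDFs can be inlined, which lifts the parallelism restriction for queries where inlining happens. A minimal sketch for checking whether a given function qualifies (the function itself is hypothetical):

-- A trivial scalar UDF to inspect.
CREATE FUNCTION dbo.AddOne (@x int)
RETURNS int
AS
BEGIN
    RETURN @x + 1;
END;
GO

-- is_inlineable = 1 means the optimizer may inline the UDF,
-- allowing the surrounding query to go parallel.
SELECT OBJECT_NAME(object_id) AS fn, is_inlineable
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.AddOne');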

Tune a SQL scalar function that does simple operations many times

I have a column of data type image whose values look similar to this:

0x…32004200460054004F00560031004800360053005100380031006500300043004300550055003500350034003300370038005600420047003400310047004F004A00460030004C003100370030005200380054003600370045004F00320032004E005600360039004C00…

I only need a certain sequence of the image value, and I need to convert it to a character-like data type (VARCHAR?) like this:

2BFTOV1H6SQ81e0CCUU554378VBG41GOJF0L170R8T67EO22NV69L

The conversion is done as follows: omit every second pair (it's… Read More Tune a SQL scalar function that does simple operations many times
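Omitting every second byte pair of this sample just strips the 0x00 bytes of what looks like UTF-16LE text, so one set-based alternative to a looping scalar function, sketched under that assumption with hypothetical table and column names, is a chain of casts:

-- image cannot be cast to nvarchar directly, so go through varbinary first;
-- the nvarchar cast interprets the bytes as UTF-16LE, and narrowing to
-- varchar drops the 0x00 pairs.
SELECT CAST(CAST(CAST(img_col AS varbinary(max)) AS nvarchar(max)) AS varchar(max)) AS decoded
FROM dbo.MyTable;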

What is the most elegant way to apply a custom function to a PySpark dataframe with multiple columns?

I need to create new fields based on three dataframe fields. This works, but it seems inefficient:

def my_func(very_long_field_name_a, very_long_field_name_b, very_long_field_name_c):
    if very_long_field_name_a >= very_long_field_name_b and very_long_field_name_c <= very_long_field_name_b:
        return 'Y'
    elif very_long_field_name_a <= very_long_field_name_b and very_long_field_name_c >= very_long_field_name_b:
        return 'Y'
    else:
        return 'N'

import pyspark.sql.functions as F

my_udf = F.udf(my_func)
df.withColumn('new_field', my_udf(df.very_long_field_name_a, df.very_long_field_name_b,… Read More What is the most elegant way to apply a custom function to a PySpark dataframe with multiple columns?
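A common UDF-free way to express the same logic, sketched here with the poster's column names, is when/otherwise on native column expressions, which Spark can optimize and which avoids Python serialization overhead:

import pyspark.sql.functions as F

a = F.col("very_long_field_name_a")
b = F.col("very_long_field_name_b")
c = F.col("very_long_field_name_c")

# Both 'Y' branches collapse into one boolean condition; everything else is 'N'.
df = df.withColumn(
    "new_field",
    F.when(((a >= b) & (c <= b)) | ((a <= b) & (c >= b)), "Y").otherwise("N"),
)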

Defined function is miscounting the number of upper/lowercase characters

def letcheck(a):
    upper = 0
    lower = 0
    for letter in a:
        if a.islower():
            lower += 1
        else:
            upper += 1
    print('The number of lowercase letters is', lower)
    print('The number of uppercase letters is', upper)
    return

letcheck('My name is Slugcat')

Hi there. I imagine this is very basic for most of you, so forgive… Read More Defined function is miscounting the number of upper/lowercase characters
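The likely bug is that the loop tests the whole string (a.islower()) instead of each character, so every iteration takes the same branch. A minimal corrected sketch that also keeps non-letters such as spaces out of both counts:

def letcheck(a):
    upper = 0
    lower = 0
    for letter in a:
        # Test the current character, not the whole string
        if letter.islower():
            lower += 1
        elif letter.isupper():
            upper += 1
    print('The number of lowercase letters is', lower)
    print('The number of uppercase letters is', upper)

letcheck('My name is Slugcat')  # lowercase: 13, uppercase: 2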